From thierry.carrez+lp at gmail.com Tue Jun 4 15:04:02 2013 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Tue, 04 Jun 2013 15:04:02 -0000 Subject: [Openstack-security] [Bug 1187107] Re: quantum-ns-metadata-proxy runs as root References: <20130603194234.27198.32504.malonedeb@gac.canonical.com> Message-ID: <20130604150402.5748.81537.malone@chaenomeles.canonical.com> Classifying as a security hardening opportunity ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1187107 Title: quantum-ns-metadata-proxy runs as root Status in OpenStack Quantum (virtual network service): New Bug description: # ps -ef | grep quantum-ns-metadata-proxy root 10239 1 0 19:01 ? 00:00:00 python /usr/bin/quantum-ns-metadata-proxy --pid_file=/var/lib/quantum/external/pids/7a44de32-3ac0-4f3e-92cc-1a37d8211db8.pid --router_id=7a44de32-3ac0-4f3e-92cc-1a37d8211db8 --state_path=/var/lib/quantum --debug --log-file=quantum-ns-metadata-proxy7a44de32-3ac0-4f3e-92cc-1a37d8211db8.log --log-dir=/var/log/quantum Root is needed to open the namespace, but the quantum-ns-metadata-proxy does not need root - it listens on port 9697 by default, not port 80. I tried changing /etc/quantum/rootwrap.d/l3.filters for it to run as quantum instead: metadata_proxy: CommandFilter, /usr/bin/quantum-ns-metadata-proxy, quantum but it still runs as root. To manage notifications about this bug go to: https://bugs.launchpad.net/quantum/+bug/1187107/+subscriptions From devoid at anl.gov Mon Jun 3 19:26:40 2013 From: devoid at anl.gov (Scott Devoid) Date: Mon, 03 Jun 2013 19:26:40 -0000 Subject: [Openstack-security] [Bug 1187104] [NEW] Implement policy check for object ownership References: <20130603192640.27171.80420.malonedeb@gac.canonical.com> Message-ID: <20130603192640.27171.80420.malonedeb@gac.canonical.com> Public bug reported: As far as I can tell, there is no policy check for resource ownership. The current policy checks support: all, none, role-membership, and tenant-membership. This means that the most minimal policy for an action, e.g. "compute:delete" is "role:Name and tenant_id:%(tenant_id)s". This role would allow any member of a project to delete any instance, which is a problem! We need something like: "owns:%(resource_id)s" which checks the "user_id" field associated with the resource? ** Affects: nova Importance: Undecided Status: New ** Tags: ops security -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1187104 Title: Implement policy check for object ownership Status in OpenStack Compute (Nova): New Bug description: As far as I can tell, there is no policy check for resource ownership. The current policy checks support: all, none, role-membership, and tenant-membership. This means that the most minimal policy for an action, e.g. "compute:delete" is "role:Name and tenant_id:%(tenant_id)s". This role would allow any member of a project to delete any instance, which is a problem! We need something like: "owns:%(resource_id)s" which checks the "user_id" field associated with the resource?
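For illustration, the requested check amounts to comparing the requester against the resource's recorded creator. A minimal sketch (hypothetical helper name check_owner; this is not the actual nova/oslo policy-engine API):

    def check_owner(target, creds):
        # Grant access only when the authenticated user created the
        # resource being acted on. 'target' is the resource (e.g. an
        # instance dict) and 'creds' is the request context.
        return target.get('user_id') == creds.get('user_id')

    # A tenant member who does not own the instance is refused.
    instance = {'user_id': 'alice', 'tenant_id': 'tenant-1'}
    request = {'user_id': 'bob', 'tenant_id': 'tenant-1'}
    assert check_owner(instance, request) is False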
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1187104/+subscriptions From robert.clark at hp.com Thu Jun 6 16:46:16 2013 From: robert.clark at hp.com (Clark, Robert Graham) Date: Thu, 6 Jun 2013 16:46:16 +0000 Subject: [Openstack-security] [OSSG] Apologies Message-ID: All, Sorry I can't make it to the meeting today. If possible, could you discuss additional OSSNs that could be useful please? Thanks -Rob -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6187 bytes Desc: not available URL: From 1187107 at bugs.launchpad.net Thu Jun 6 21:29:23 2013 From: 1187107 at bugs.launchpad.net (Mark McClain) Date: Thu, 06 Jun 2013 21:29:23 -0000 Subject: [Openstack-security] [Bug 1187107] Re: quantum-ns-metadata-proxy runs as root References: <20130603194234.27198.32504.malonedeb@gac.canonical.com> Message-ID: <20130606212926.23562.87467.launchpad@wampee.canonical.com> ** Changed in: quantum Importance: Undecided => Medium ** Changed in: quantum Status: New => Triaged ** Tags added: l3-ipam-dhcp -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1187107 Title: quantum-ns-metadata-proxy runs as root Status in OpenStack Quantum (virtual network service): Triaged Bug description: # ps -ef | grep quantum-ns-metadata-proxy root 10239 1 0 19:01 ? 00:00:00 python /usr/bin/quantum-ns-metadata-proxy --pid_file=/var/lib/quantum/external/pids/7a44de32-3ac0-4f3e-92cc-1a37d8211db8.pid --router_id=7a44de32-3ac0-4f3e-92cc-1a37d8211db8 --state_path=/var/lib/quantum --debug --log-file=quantum-ns-metadata-proxy7a44de32-3ac0-4f3e-92cc-1a37d8211db8.log --log-dir=/var/log/quantum Root is needed to open the namespace, but the quantum-ns-metadata-proxy does not need root - it listens on port 9697 by default, not port 80. I tried changing /etc/quantum/rootwrap.d/l3.filters for it to run as quantum instead: metadata_proxy: CommandFilter, /usr/bin/quantum-ns-metadata-proxy, quantum but it still runs as root. To manage notifications about this bug go to: https://bugs.launchpad.net/quantum/+bug/1187107/+subscriptions From 1129748 at bugs.launchpad.net Fri Jun 7 14:26:46 2013 From: 1129748 at bugs.launchpad.net (OpenStack Hudson) Date: Fri, 07 Jun 2013 14:26:46 -0000 Subject: [Openstack-security] [Bug 1129748] Fix proposed to nova (master) References: <20130219034904.21134.44738.malonedeb@wampee.canonical.com> Message-ID: <20130607142647.7800.35443.malone@gac.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/32146 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1129748 Title: image files in _base should not be world-readable Status in OpenStack Compute (Nova): In Progress Bug description: Already public in https://bugzilla.redhat.com/show_bug.cgi?id=896085 , so probably no point making this private. But I checked the security vulnerability box anyway so someone else can decide. We create image files in /var/lib/nova/instances/_base with default permissions, usually 644. It would be better not to make the image files world-readable, in case they contain private data.
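The fix being asked for amounts to creating the cached image files with a restrictive mode rather than inheriting the process umask. A minimal sketch (illustrative only, not the actual nova patch; the path is made up):

    import os

    def create_private_file(path):
        # Pass an explicit mode of 0o640 so the file is at most
        # group-readable, instead of the usual umask-derived 644.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640)
        return os.fdopen(fd, 'w')

    with create_private_file('/tmp/base_image_example') as f:
        f.write('image data')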
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1129748/+subscriptions From 1175905 at bugs.launchpad.net Fri Jun 7 20:44:19 2013 From: 1175905 at bugs.launchpad.net (Dolph Mathews) Date: Fri, 07 Jun 2013 20:44:19 -0000 Subject: [Openstack-security] [Bug 1175905] Re: passlib failure to sanitize env variables PASSLIB_MAX_PASSWORD_SIZE References: <20130503065127.14958.89453.malonedeb@gac.canonical.com> Message-ID: <20130607204421.7769.68218.launchpad@gac.canonical.com> ** Changed in: keystone Status: New => Triaged ** Changed in: keystone Importance: Undecided => Medium -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1175905 Title: passlib failure to sanitize env variables PASSLIB_MAX_PASSWORD_SIZE Status in OpenStack Identity (Keystone): Triaged Bug description: Grant Murphy originally reported: * Usage of passlib The keystone server does not appear to sanitize the environment when starting. This means that an unintended value can be set for PASSLIB_MAX_PASSWORD_SIZE, which will overwrite the default value of 4096 and potentially cause an unhandled passlib.exc.PasswordSizeError. We should ensure sensible defaults are applied here prior to loading passlib. If this is exploitable it will need a CVE; if not, we should still harden it so it can't be monkeyed with in the future. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1175905/+subscriptions From 1175904 at bugs.launchpad.net Fri Jun 7 20:45:14 2013 From: 1175904 at bugs.launchpad.net (Dolph Mathews) Date: Fri, 07 Jun 2013 20:45:14 -0000 Subject: [Openstack-security] [Bug 1175904] Re: passlib trunc_password MAX_PASSWORD_LENGTH password truncation References: <20130503065124.14566.73303.malonedeb@gac.canonical.com> Message-ID: <20130607204515.7676.10359.launchpad@gac.canonical.com> ** Changed in: keystone Importance: Undecided => Medium ** Changed in: keystone Status: New => Confirmed -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1175904 Title: passlib trunc_password MAX_PASSWORD_LENGTH password truncation Status in OpenStack Identity (Keystone): Confirmed Bug description: Grant Murphy originally reported: * Insecure / bad practice The trunc_password function attempts to correct and truncate passwords that are over the MAX_PASSWORD_LENGTH value (default 4096). As the MAX_PASSWORD_LENGTH field is globally mutable, it could be modified to restrict all passwords to length = 1. This scenario might be unlikely, but generally speaking we should not try to 'fix' invalid input and continue processing as if nothing happened. If this is exploitable it will need a CVE; if not, we should still harden it so it can't be monkeyed with in the future.
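The hardening suggested here is to validate rather than repair: reject an over-long password outright instead of truncating it. A minimal sketch of that behaviour (illustrative only; not keystone's actual code):

    MAX_PASSWORD_LENGTH = 4096

    def validate_password(password):
        # Fail loudly on oversized input instead of silently
        # truncating it and continuing as if nothing happened.
        if len(password) > MAX_PASSWORD_LENGTH:
            raise ValueError('password exceeds maximum length')
        return password

    validate_password('correct horse battery staple')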
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1175904/+subscriptions From Numero.8 at free.fr Sun Jun 9 19:18:45 2013 From: Numero.8 at free.fr (Numero 8) Date: Sun, 09 Jun 2013 19:18:45 -0000 Subject: [Openstack-security] [Bug 1004114] Re: Password logging References: <20120524190215.26515.18198.malonedeb@gac.canonical.com> Message-ID: <20130609191848.13525.56627.launchpad@soybean.canonical.com> ** Changed in: python-keystoneclient Assignee: (unassigned) => Numero 8 (numero-8) -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1004114 Title: Password logging Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in Python client library for Keystone: Triaged Bug description: When the log level is set to DEBUG, keystoneclient's full-request logging mechanism kicks in, exposing plaintext passwords, etc. This bug is mostly out of the scope of Horizon; however, Horizon can also be more secure in this regard. We should make sure that wherever we *are* handling sensitive data we use Django's error report filtering mechanisms so they don't appear in tracebacks, etc. (https://docs.djangoproject.com/en/dev/howto/error-reporting/#filtering-error-reports) Keystone may also want to look at respecting such annotations in their logging mechanism, i.e. if Django were properly annotating these data objects, keystoneclient could check for those annotations and properly sanitize the log output. If not this exact mechanism, then something similar would be wise. For the time being, it's also worth documenting in both projects that a log level of DEBUG will log passwords in plain text. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions From 1171662 at bugs.launchpad.net Mon Jun 10 20:05:39 2013 From: 1171662 at bugs.launchpad.net (Dolph Mathews) Date: Mon, 10 Jun 2013 20:05:39 -0000 Subject: [Openstack-security] [Bug 1171662] Re: User transaction events are not logged References: <20130423001733.7402.84350.malonedeb@gac.canonical.com> Message-ID: <20130610200542.7609.21622.launchpad@gac.canonical.com> ** Changed in: keystone Importance: Undecided => Wishlist -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1171662 Title: User transaction events are not logged Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Identity (Keystone): Incomplete Bug description: Authentication transactions like successful logins, failed login attempts and profile updates should be logged. Authorization failures should be logged as well, for example if a user attempts to access resources that they don't have the privilege to use. This way the logs can be used for security audits; this is important for enterprise operators for security compliance. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1171662/+subscriptions From Numero.8 at free.fr Thu Jun 13 19:28:22 2013 From: Numero.8 at free.fr (Numero 8) Date: Thu, 13 Jun 2013 19:28:22 -0000 Subject: [Openstack-security] [Bug 1004114] Re: Password logging References: <20120524190215.26515.18198.malonedeb@gac.canonical.com> Message-ID: <20130613192823.20988.74202.malone@chaenomeles.canonical.com> I'm working on it... Review in progress.
-- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1004114 Title: Password logging Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in Python client library for Keystone: Triaged Bug description: When the log level is set to DEBUG, keystoneclient's full-request logging mechanism kicks in, exposing plaintext passwords, etc. This bug is mostly out of the scope of Horizon; however, Horizon can also be more secure in this regard. We should make sure that wherever we *are* handling sensitive data we use Django's error report filtering mechanisms so they don't appear in tracebacks, etc. (https://docs.djangoproject.com/en/dev/howto/error-reporting/#filtering-error-reports) Keystone may also want to look at respecting such annotations in their logging mechanism, i.e. if Django were properly annotating these data objects, keystoneclient could check for those annotations and properly sanitize the log output. If not this exact mechanism, then something similar would be wise. For the time being, it's also worth documenting in both projects that a log level of DEBUG will log passwords in plain text. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions From andrew.laski at rackspace.com Fri Jun 14 17:59:43 2013 From: andrew.laski at rackspace.com (Andrew Laski) Date: Fri, 14 Jun 2013 17:59:43 -0000 Subject: [Openstack-security] [Bug 1187104] Re: Implement policy check for object ownership References: <20130603192640.27171.80420.malonedeb@gac.canonical.com> Message-ID: <20130614175947.23389.65395.launchpad@wampee.canonical.com> ** Changed in: nova Importance: Undecided => Wishlist ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1187104 Title: Implement policy check for object ownership Status in OpenStack Compute (Nova): Invalid Bug description: As far as I can tell, there is no policy check for resource ownership. The current policy checks support: all, none, role-membership, and tenant-membership. This means that the most minimal policy for an action, e.g. "compute:delete" is "role:Name and tenant_id:%(tenant_id)s". This role would allow any member of a project to delete any instance, which is a problem! We need something like: "owns:%(resource_id)s" which checks the "user_id" field associated with the resource? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1187104/+subscriptions From andrew.laski at rackspace.com Fri Jun 14 18:14:30 2013 From: andrew.laski at rackspace.com (Andrew Laski) Date: Fri, 14 Jun 2013 18:14:30 -0000 Subject: [Openstack-security] [Bug 1187104] Re: Implement policy check for object ownership References: <20130603192640.27171.80420.malonedeb@gac.canonical.com> Message-ID: <20130614181430.7580.16333.malone@gac.canonical.com> You are correct that there is no 'owns' check, but the policy engine does support checking against arbitrary fields in a 'target'. In a lot (most?) of those checks that occur in the compute/api.py layer, vs the wsgi layer, the target is an instance dict so something like user_id:%(user_id)s would work.
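To illustrate the mechanism just described (a sketch that mimics the engine's generic check, not the real openstack.common.policy code): the left side of a rule such as user_id:%(user_id)s is looked up in the credentials, and the right side is interpolated from the target, so passing the instance as the target turns the rule into an ownership test.

    def generic_check(cred_key, match, target, creds):
        # creds[cred_key] must equal the match template rendered
        # against the target, e.g. '%(user_id)s' % instance.
        return creds.get(cred_key) == (match % target)

    instance = {'user_id': 'alice', 'project_id': 'tenant-1'}
    creds = {'user_id': 'alice'}
    # "user_id:%(user_id)s" passes only for the instance's owner.
    assert generic_check('user_id', '%(user_id)s', instance, creds)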
Now, that's not universally true so there may be specific checks that could use a more robust target to check against, and I would suggest opening bugs for specific checks in that case. So I marked this as invalid because I think it's a bit general and is somewhat supported. But please open reports for specific policy checks that are too limiting. If you're interested in expanding the policy engine capabilities to support an 'owns' resource check, that would fall under a blueprint rather than a bug report. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1187104 Title: Implement policy check for object ownership Status in OpenStack Compute (Nova): Invalid Bug description: As far as I can tell, there is no policy check for resource ownership. The current policy checks support: all, none, role-membership, and tenant-membership. This means that the most minimal policy for an action, e.g. "compute:delete" is "role:Name and tenant_id:%(tenant_id)s". This role would allow any member of a project to delete any instance, which is a problem! We need something like: "owns:%(resource_id)s" which checks the "user_id" field associated with the resource? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1187104/+subscriptions From Numero.8 at free.fr Sun Jun 16 21:13:32 2013 From: Numero.8 at free.fr (Numero 8) Date: Sun, 16 Jun 2013 21:13:32 -0000 Subject: [Openstack-security] [Bug 1004114] Re: Password logging References: <20120524190215.26515.18198.malonedeb@gac.canonical.com> Message-ID: <20130616211333.7831.84560.malone@gac.canonical.com> Still working on it. See https://review.openstack.org/#/c/32343 . -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1004114 Title: Password logging Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in Python client library for Keystone: Triaged Bug description: When the log level is set to DEBUG, keystoneclient's full-request logging mechanism kicks in, exposing plaintext passwords, etc. This bug is mostly out of the scope of Horizon; however, Horizon can also be more secure in this regard. We should make sure that wherever we *are* handling sensitive data we use Django's error report filtering mechanisms so they don't appear in tracebacks, etc. (https://docs.djangoproject.com/en/dev/howto/error-reporting/#filtering-error-reports) Keystone may also want to look at respecting such annotations in their logging mechanism, i.e. if Django were properly annotating these data objects, keystoneclient could check for those annotations and properly sanitize the log output. If not this exact mechanism, then something similar would be wise. For the time being, it's also worth documenting in both projects that a log level of DEBUG will log passwords in plain text. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions From devoid at anl.gov Mon Jun 17 18:09:43 2013 From: devoid at anl.gov (Scott Devoid) Date: Mon, 17 Jun 2013 18:09:43 -0000 Subject: [Openstack-security] [Bug 1187104] Re: Implement policy check for object ownership References: <20130603192640.27171.80420.malonedeb@gac.canonical.com> Message-ID: <20130617180943.3908.6477.malone@soybean.canonical.com> That's reasonable.
A few clarification questions; please forgive me if these are dumb, but I'm new to OpenStack. 1. Where is the separation between 'wsgi' and compute/api.py layers? 2. From what I can tell, to get the "openstack.common.policy.GenericCheck" to have an "ownership" check, we'd need to add "owner_id" to the target and make sure "user_id" was in the credentials? "user_id:%(user_id)s" should always return true since target["user_id"] is the user in the credential? 3. Is there someone who has detailed knowledge of the policy stuff? Looking over the code, I'm going to have trouble landing anything without a lay-of-the-land. 4. Would expansions to the policy engine fall under the oslo project? How are changes to both oslo and nova gated? I can already see that nova.policy calls openstack.common.policy.check but in oslo-incubator that function no longer exists. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1187104 Title: Implement policy check for object ownership Status in OpenStack Compute (Nova): Invalid Bug description: As far as I can tell, there is no policy check for resource ownership. The current policy checks support: all, none, role-membership, and tenant-membership. This means that the most minimal policy for an action, e.g. "compute:delete" is "role:Name and tenant_id:%(tenant_id)s". This role would allow any member of a project to delete any instance, which is a problem! We need something like: "owns:%(resource_id)s" which checks the "user_id" field associated with the resource? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1187104/+subscriptions From 1004114 at bugs.launchpad.net Tue Jun 18 21:56:05 2013 From: 1004114 at bugs.launchpad.net (OpenStack Hudson) Date: Tue, 18 Jun 2013 21:56:05 -0000 Subject: [Openstack-security] [Bug 1004114] Re: Password logging References: <20120524190215.26515.18198.malonedeb@gac.canonical.com> Message-ID: <20130618215605.29699.17447.malone@wampee.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/33532 ** Changed in: python-keystoneclient Status: Triaged => In Progress -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1004114 Title: Password logging Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in Python client library for Keystone: In Progress Bug description: When the log level is set to DEBUG, keystoneclient's full-request logging mechanism kicks in, exposing plaintext passwords, etc. This bug is mostly out of the scope of Horizon; however, Horizon can also be more secure in this regard. We should make sure that wherever we *are* handling sensitive data we use Django's error report filtering mechanisms so they don't appear in tracebacks, etc. (https://docs.djangoproject.com/en/dev/howto/error-reporting/#filtering-error-reports) Keystone may also want to look at respecting such annotations in their logging mechanism, i.e. if Django were properly annotating these data objects, keystoneclient could check for those annotations and properly sanitize the log output. If not this exact mechanism, then something similar would be wise. For the time being, it's also worth documenting in both projects that a log level of DEBUG will log passwords in plain text.
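One common mitigation for this class of bug is to mask sensitive fields before a request body is handed to the debug logger. A minimal sketch (hypothetical key list; not the fix actually merged in keystoneclient):

    SENSITIVE_KEYS = frozenset(['password', 'token', 'secret'])

    def mask_secrets(obj):
        # Recursively replace the values of sensitive keys so the
        # structure stays loggable but the secrets do not.
        if isinstance(obj, dict):
            return dict((k, '***' if k in SENSITIVE_KEYS
                         else mask_secrets(v)) for k, v in obj.items())
        if isinstance(obj, list):
            return [mask_secrets(v) for v in obj]
        return obj

    body = {'auth': {'passwordCredentials':
                     {'username': 'demo', 'password': 'hunter2'}}}
    print(mask_secrets(body))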
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions From 1118441 at bugs.launchpad.net Thu Jun 20 11:20:57 2013 From: 1118441 at bugs.launchpad.net (OpenStack Hudson) Date: Thu, 20 Jun 2013 11:20:57 -0000 Subject: [Openstack-security] [Bug 1118441] Re: Horizon does not implement a browser session timeout References: <20130207150751.16050.96632.malonedeb@soybean.canonical.com> Message-ID: <20130620112057.15089.50644.malone@chaenomeles.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/33802 ** Changed in: horizon Status: Confirmed => In Progress ** Changed in: horizon Assignee: Jesse Pretorius (jesse-pretorius) => Matthias Runge (mrunge) -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1118441 Title: Horizon does not implement a browser session timeout Status in OpenStack Dashboard (Horizon): In Progress Bug description: Horizon does not terminate user sessions (from a browser) after a reasonable period of inactivity. The only timeout is that of keystone's token which is often set to very long periods. The only session timeout implemented by Horizon is Django's SESSION_EXPIRE_AT_BROWSER_CLOSE which closes the session when the browser closes. Due to the nature of what can be done in Horizon (both now and in the future) this could pose significant risk since it enables bystanders to make use of unlocked workstations in order to access sensitive data and do otherwise unauthorised activities on behalf of what some may call a 'careless' end-user. Implementing a reasonable inactive session timeout for Horizon would mitigate this risk. An option to solve this problem could be to include this code: https://github.com/subhranath/django-session-idle-timeout There is some discussion regarding possible solutions here: http://stackoverflow.com/questions/3024153/how-to-expire-session-due-to-inactivity-in-django To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1118441/+subscriptions From gerrit2 at review.openstack.org Fri Jun 21 15:08:28 2013 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 21 Jun 2013 15:08:28 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I9b0dcb7d648ee6809185c71ba457c8a8a6c90d50 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/30973 Log: commit ab0b6fa93efa16eea80a3f4a3817794420d5415f Author: Joel Coffman Date: Fri Jun 21 10:54:05 2013 -0400 Create key manager interface This interface provides a thin wrapper around an underlying key management implementation such as Barbican or a KMIP server. The key manager interface is used by the volume encryption code to retrieve keys for volumes.
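The shape of such a wrapper, sketched for illustration (hypothetical method names; the interface actually proposed lives in the review linked above):

    import abc

    class KeyManager(object):
        # Backend-agnostic facade; a concrete subclass would delegate
        # to Barbican, a KMIP server, or another key store.
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def create_key(self, context, algorithm='AES', length=256):
            """Create a new key and return its identifier."""

        @abc.abstractmethod
        def get_key(self, context, key_id):
            """Return the key associated with key_id."""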
Implements: blueprint encrypt-cinder-volumes Change-Id: I9b0dcb7d648ee6809185c71ba457c8a8a6c90d50 SecurityImpact From 1118441 at bugs.launchpad.net Fri Jun 21 22:47:58 2013 From: 1118441 at bugs.launchpad.net (OpenStack Hudson) Date: Fri, 21 Jun 2013 22:47:58 -0000 Subject: [Openstack-security] [Bug 1118441] Re: Horizon does not implement a browser session timeout References: <20130207150751.16050.96632.malonedeb@soybean.canonical.com> Message-ID: <20130621224758.4407.60623.malone@gac.canonical.com> Reviewed: https://review.openstack.org/33802 Committed: http://github.com/openstack/horizon/commit/dc7668177a2ef638d9a86e7f6c7f62b075b9592c Submitter: Jenkins Branch: master commit dc7668177a2ef638d9a86e7f6c7f62b075b9592c Author: Matthias Runge Date: Thu Jun 20 12:52:37 2013 +0200 Implement Browser session timeout By default, Horizon just uses sessions, which expire when the browser is closed. This additionally implements a session timeout. Change-Id: I140ee2ee37e092036a66d890d920423dfc493fba Fixes: bug 1118441 ** Changed in: horizon Status: In Progress => Fix Committed -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1118441 Title: Horizon does not implement a browser session timeout Status in OpenStack Dashboard (Horizon): Fix Committed Bug description: Horizon does not terminate user sessions (from a browser) after a reasonable period of inactivity. The only timeout is that of keystone's token which is often set to very long periods. The only session timeout implemented by Horizon is Django's SESSION_EXPIRE_AT_BROWSER_CLOSE which closes the session when the browser closes. Due to the nature of what can be done in Horizon (both now and in the future) this could pose significant risk since it enables bystanders to make use of unlocked workstations in order to access sensitive data and do otherwise unauthorised activities on behalf of what some may call a 'careless' end-user. Implementing a reasonable inactive session timeout for Horizon would mitigate this risk. An option to solve this problem could be to include this code: https://github.com/subhranath/django-session-idle-timeout There is some discussion regarding possible solutions here: http://stackoverflow.com/questions/3024153/how-to-expire-session-due-to-inactivity-in-django To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1118441/+subscriptions From euan.harris at citrix.com Mon Jun 24 14:20:41 2013 From: euan.harris at citrix.com (Euan Harris) Date: Mon, 24 Jun 2013 14:20:41 -0000 Subject: [Openstack-security] [Bug 1074087] Re: XenApi migration driver should use execvp References: <20121101174933.5279.14962.malonedeb@gac.canonical.com> Message-ID: <20130624142044.11396.20556.launchpad@chaenomeles.canonical.com> ** Changed in: nova Assignee: (unassigned) => Euan Harris (euanh) ** Summary changed: - XenApi migration driver should use execvp + XenApi migration driver should not use shell=True with Popen -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1074087 Title: XenApi migration driver should not use shell=True with Popen Status in OpenStack Compute (Nova): Triaged Bug description: The XenApi drivers split a string to create an array for subprocess.Popen, rather than passing an array directly. This invites the potential for command injection / manipulation.
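A minimal illustration of the safer calling convention (a sketch with a stand-in command, not the actual plugin code):

    import subprocess

    user = 'foo; rm -rf /'  # hostile value reaching the command line

    # Splitting a formatted string (or using shell=True) lets such a
    # value inject extra arguments or shell commands. Passing an
    # argument vector keeps it as one inert argv element:
    subprocess.call(['echo', 'rsync', '/src', '%s@host:/dst' % user])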
There is no clearly valid reason to use string splitting here when arguments can be passed, as elsewhere, directly into Popen. The behavior here is present in current Trunk, Folsom, and Essex. Per Trunk and Folsom, _rsync_vhds calls plugins.utils.subprocess to perform the splitting. In Essex, this behavior was present directly in migration/transfer_vhd.py, rather than in utils.py. Earlier releases have not been evaluated. I am not certain if this is directly exploitable. The user field is inserted into the generated strings used for command-line execution, and it does seem that Keystone allows usernames to contain arbitrary tokens/characters such as spaces. It is not clear to me if the user field directly matches that in Keystone, if the user field is otherwise validated in the API, etc. Other fields inserted into the command string seem to be internally generated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1074087/+subscriptions From euan.harris at citrix.com Mon Jun 24 14:37:04 2013 From: euan.harris at citrix.com (Euan Harris) Date: Mon, 24 Jun 2013 14:37:04 -0000 Subject: [Openstack-security] [Bug 1074087] Re: XenApi migration driver should not use shell=True with Popen References: <20121101174933.5279.14962.malonedeb@gac.canonical.com> Message-ID: <20130624143704.4345.71205.malone@gac.canonical.com> Changing the summary - xenhost can't use execvp because it needs to process and return the output of the xe processes that it spawns. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1074087 Title: XenApi migration driver should not use shell=True with Popen Status in OpenStack Compute (Nova): Triaged Bug description: The XenApi drivers split a string to create an array for subprocess.Popen, rather than passing an array directly. This invites the potential for command injection / manipulation. There is no clearly valid reason to use string splitting here when arguments can be passed, as elsewhere, directly into Popen. The behavior here is present in current Trunk, Folsom, and Essex. Per Trunk and Folsom, _rsync_vhds calls plugins.utils.subprocess to perform the splitting. In Essex, this behavior was present directly in migration/transfer_vhd.py, rather than in utils.py. Earlier releases have not been evaluated. I am not certain if this is directly exploitable. The user field is inserted into the generated strings used for command-line execution, and it does seem that Keystone allows usernames to contain arbitrary tokens/characters such as spaces. It is not clear to me if the user field directly matches that in Keystone, if the user field is otherwise validated in the API, etc. Other fields inserted into the command string seem to be internally generated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1074087/+subscriptions From euan.harris at citrix.com Mon Jun 24 16:05:37 2013 From: euan.harris at citrix.com (Euan Harris) Date: Mon, 24 Jun 2013 16:05:37 -0000 Subject: [Openstack-security] [Bug 1074087] Re: XenApi migration driver should not use shell=True with Popen References: <20121101174933.5279.14962.malonedeb@gac.canonical.com> Message-ID: <20130624160539.4314.44826.launchpad@gac.canonical.com> ** Changed in: nova Status: Triaged => In Progress -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1074087 Title: XenApi migration driver should not use shell=True with Popen Status in OpenStack Compute (Nova): In Progress Bug description: The XenApi drivers split a string to create an array for subprocess.Popen, rather than passing an array directly. This invites the potential for command injection / manipulation. There is no clearly valid reason to use string splitting here when arguments can be passed, as elsewhere, directly into Popen. The behavior here is present in current Trunk, Folsom, and Essex. Per Trunk and Folsom, _rsync_vhds calls plugins.utils.subprocess to perform the splitting. In Essex, this behavior was present directly in migration/transfer_vhd.py, rather than in utils.py. Earlier releases have not been evaluated. I am not certain if this is directly exploitable. The user field is inserted into the generated strings used for command-line execution, and it does seem that Keystone allows usernames to contain arbitrary tokens/characters such as spaces. It is not clear to me if the user field directly matches that in Keystone, if the user field is otherwise validated in the API, etc. Other fields inserted into the command string seem to be internally generated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1074087/+subscriptions From 1074087 at bugs.launchpad.net Wed Jun 26 16:33:47 2013 From: 1074087 at bugs.launchpad.net (OpenStack Hudson) Date: Wed, 26 Jun 2013 16:33:47 -0000 Subject: [Openstack-security] [Bug 1074087] Fix proposed to nova (master) References: <20121101174933.5279.14962.malonedeb@gac.canonical.com> Message-ID: <20130626163347.11418.88447.malone@gac.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/34580 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1074087 Title: XenApi migration driver should not use shell=True with Popen Status in OpenStack Compute (Nova): In Progress Bug description: The XenApi drivers split a string to create an array for subprocess.Popen, rather than passing an array directly. This invites the potential for command injection / manipulation. There is no clearly valid reason to use string splitting here when arguments can be passed, as elsewhere, directly into Popen. The behavior here is present in current Trunk, Folsom, and Essex. Per Trunk and Folsom, _rsync_vhds calls plugins.utils.subprocess to perform the splitting. In Essex, this behavior was present directly in migration/transfer_vhd.py, rather than in utils.py. Earlier releases have not been evaluated. I am not certain if this is directly exploitable. The user field is inserted into the generated strings used for command-line execution, and it does seem that Keystone allows usernames to contain arbitrary tokens/characters such as spaces. It is not clear to me if the user field directly matches that in Keystone, if the user field is otherwise validated in the API, etc. Other fields inserted into the command string seem to be internally generated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1074087/+subscriptions From SHULMANA at il.ibm.com Thu Jun 27 13:04:35 2013 From: SHULMANA at il.ibm.com (Alexandra Shulman-Peleg) Date: Thu, 27 Jun 2013 16:04:35 +0300 Subject: [Openstack-security] AUTO: Alexandra Shulman-Peleg is out of the office. (returning 07/07/2013) Message-ID: I am out of the office until 07/07/2013.
I am out of office. If required, please contact my manager Dalit Naor (dalit at il.ibm.com). For matters related to the VISION project, please contact Hillel Kolodner (kolodner at il.ibm.com). Note: This is an automated response to your message "Openstack-security Digest, Vol 4, Issue 12" sent on 27/06/2013 15:00:01. This is the only notification you will receive while this person is away. From nathanael.i.burton.work at gmail.com Thu Jun 27 19:09:34 2013 From: nathanael.i.burton.work at gmail.com (Nathanael Burton) Date: Thu, 27 Jun 2013 19:09:34 -0000 Subject: [Openstack-security] [Bug 1195431] [NEW] kombu_ssl_version is a cfg.StrOpt but the ssl socket code requires an Integer value References: <20130627190934.25966.77326.malonedeb@soybean.canonical.com> Message-ID: <20130627190934.25966.77326.malonedeb@soybean.canonical.com> Public bug reported: When specifying 'kombu_ssl_version' for the RPC driver such as either "kombu_ssl_version=3" or "kombu_ssl_version=SSLv3" the relevant OpenStack service (nova, cinder, etc) will fail with the following traceback: 2013-06-27 15:05:30.257 CRITICAL cinder [-] an integer is required 2013-06-27 15:05:30.257 TRACE cinder Traceback (most recent call last): 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/bin/cinder-scheduler", line 50, in 2013-06-27 15:05:30.257 TRACE cinder service.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 624, in wait 2013-06-27 15:05:30.257 TRACE cinder _launcher.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 135, in wait 2013-06-27 15:05:30.257 TRACE cinder service.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait 2013-06-27 15:05:30.257 TRACE cinder return self._exit_event.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait 2013-06-27 15:05:30.257 TRACE cinder return hubs.get_hub().switch() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch 2013-06-27 15:05:30.257 TRACE cinder return self.greenlet.switch() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main 2013-06-27 15:05:30.257 TRACE cinder result = function(*args, **kwargs) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 96, in run_server 2013-06-27 15:05:30.257 TRACE cinder server.start() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 359, in start 2013-06-27 15:05:30.257 TRACE cinder self.manager.init_host() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/scheduler/manager.py", line 62, in init_host 2013-06-27 15:05:30.257 TRACE cinder self.request_service_capabilities(ctxt) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/scheduler/manager.py", line 141, in request_service_capabilities 2013-06-27 15:05:30.257 TRACE cinder volume_rpcapi.VolumeAPI().publish_service_capabilities(context) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/volume/rpcapi.py", line 133, in publish_service_capabilities 2013-06-27 15:05:30.257 TRACE cinder version='1.2') 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/proxy.py", line 142, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder rpc.fanout_cast(context, self._get_topic(topic), msg) 
2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/__init__.py", line 179, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder return _get_impl().fanout_cast(CONF, context, topic, msg) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 812, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder rpc_amqp.get_connection_pool(conf, Connection)) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/amqp.py", line 635, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder with ConnectionContext(conf, connection_pool) as conn: 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/amqp.py", line 122, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self.connection = connection_pool.get() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/pools.py", line 119, in get 2013-06-27 15:05:30.257 TRACE cinder created = self.create() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/amqp.py", line 76, in create 2013-06-27 15:05:30.257 TRACE cinder return self.connection_cls(self.conf) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 447, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self.reconnect() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 519, in reconnect 2013-06-27 15:05:30.257 TRACE cinder self._connect(params) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 495, in _connect 2013-06-27 15:05:30.257 TRACE cinder self.connection.connect() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/connection.py", line 246, in connect 2013-06-27 15:05:30.257 TRACE cinder return self.connection 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/connection.py", line 761, in connection 2013-06-27 15:05:30.257 TRACE cinder self._connection = self._establish_connection() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/connection.py", line 720, in _establish_connection 2013-06-27 15:05:30.257 TRACE cinder conn = self.transport.establish_connection() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/transport/pyamqp.py", line 110, in establish_connection 2013-06-27 15:05:30.257 TRACE cinder **conninfo.transport_options or {}) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/connection.py", line 136, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self.transport = create_transport(host, connect_timeout, ssl) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 252, in create_transport 2013-06-27 15:05:30.257 TRACE cinder return SSLTransport(host, connect_timeout, ssl) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 170, in __init__ 2013-06-27 15:05:30.257 TRACE cinder super(SSLTransport, self).__init__(host, connect_timeout) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", 
line 105, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self._setup_transport() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 178, in _setup_transport 2013-06-27 15:05:30.257 TRACE cinder self.sslobj = ssl.wrap_socket(self.sock, **self.sslopts) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 288, in wrap_socket 2013-06-27 15:05:30.257 TRACE cinder return GreenSSLSocket(sock, *a, **kw) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 46, in __init__ 2013-06-27 15:05:30.257 TRACE cinder super(GreenSSLSocket, self).__init__(sock.fd, *args, **kw) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/ssl.py", line 197, in __init__ 2013-06-27 15:05:30.257 TRACE cinder ciphers) 2013-06-27 15:05:30.257 TRACE cinder TypeError: an integer is required This is because the underlying rpc driver is trying to create an SSL socket which requires an integer such as the following built-in SSL integer constants: PROTOCOL_SSLv2 PROTOCOL_SSLv3 PROTOCOL_SSLv23 PROTOCOL_TLSv1 ** Affects: oslo Importance: Undecided Status: New ** Tags: security ssl -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1195431 Title: kombu_ssl_version is a cfg.StrOpt but the ssl socket code requires an Integer value Status in Oslo - a Library of Common OpenStack Code: New Bug description: When specifying 'kombu_ssl_version' for the RPC driver such as either "kombu_ssl_version=3" or "kombu_ssl_version=SSLv3" the relevant OpenStack service (nova, cinder, etc) will fail with the following traceback: 2013-06-27 15:05:30.257 CRITICAL cinder [-] an integer is required 2013-06-27 15:05:30.257 TRACE cinder Traceback (most recent call last): 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/bin/cinder-scheduler", line 50, in 2013-06-27 15:05:30.257 TRACE cinder service.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 624, in wait 2013-06-27 15:05:30.257 TRACE cinder _launcher.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 135, in wait 2013-06-27 15:05:30.257 TRACE cinder service.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait 2013-06-27 15:05:30.257 TRACE cinder return self._exit_event.wait() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait 2013-06-27 15:05:30.257 TRACE cinder return hubs.get_hub().switch() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch 2013-06-27 15:05:30.257 TRACE cinder return self.greenlet.switch() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main 2013-06-27 15:05:30.257 TRACE cinder result = function(*args, **kwargs) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 96, in run_server 2013-06-27 15:05:30.257 TRACE cinder server.start() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/service.py", line 359, in start 2013-06-27 15:05:30.257 TRACE cinder self.manager.init_host() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/scheduler/manager.py", line 62, in 
init_host 2013-06-27 15:05:30.257 TRACE cinder self.request_service_capabilities(ctxt) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/scheduler/manager.py", line 141, in request_service_capabilities 2013-06-27 15:05:30.257 TRACE cinder volume_rpcapi.VolumeAPI().publish_service_capabilities(context) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/volume/rpcapi.py", line 133, in publish_service_capabilities 2013-06-27 15:05:30.257 TRACE cinder version='1.2') 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/proxy.py", line 142, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder rpc.fanout_cast(context, self._get_topic(topic), msg) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/__init__.py", line 179, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder return _get_impl().fanout_cast(CONF, context, topic, msg) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 812, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder rpc_amqp.get_connection_pool(conf, Connection)) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/amqp.py", line 635, in fanout_cast 2013-06-27 15:05:30.257 TRACE cinder with ConnectionContext(conf, connection_pool) as conn: 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/amqp.py", line 122, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self.connection = connection_pool.get() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/pools.py", line 119, in get 2013-06-27 15:05:30.257 TRACE cinder created = self.create() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/amqp.py", line 76, in create 2013-06-27 15:05:30.257 TRACE cinder return self.connection_cls(self.conf) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 447, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self.reconnect() 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 519, in reconnect 2013-06-27 15:05:30.257 TRACE cinder self._connect(params) 2013-06-27 15:05:30.257 TRACE cinder File "/opt/stack/cinder/cinder/openstack/common/rpc/impl_kombu.py", line 495, in _connect 2013-06-27 15:05:30.257 TRACE cinder self.connection.connect() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/connection.py", line 246, in connect 2013-06-27 15:05:30.257 TRACE cinder return self.connection 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/connection.py", line 761, in connection 2013-06-27 15:05:30.257 TRACE cinder self._connection = self._establish_connection() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/connection.py", line 720, in _establish_connection 2013-06-27 15:05:30.257 TRACE cinder conn = self.transport.establish_connection() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/kombu-2.5.11-py2.7.egg/kombu/transport/pyamqp.py", line 110, in establish_connection 2013-06-27 15:05:30.257 TRACE cinder **conninfo.transport_options or {}) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/connection.py", line 
136, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self.transport = create_transport(host, connect_timeout, ssl) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 252, in create_transport 2013-06-27 15:05:30.257 TRACE cinder return SSLTransport(host, connect_timeout, ssl) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 170, in __init__ 2013-06-27 15:05:30.257 TRACE cinder super(SSLTransport, self).__init__(host, connect_timeout) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 105, in __init__ 2013-06-27 15:05:30.257 TRACE cinder self._setup_transport() 2013-06-27 15:05:30.257 TRACE cinder File "/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/transport.py", line 178, in _setup_transport 2013-06-27 15:05:30.257 TRACE cinder self.sslobj = ssl.wrap_socket(self.sock, **self.sslopts) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 288, in wrap_socket 2013-06-27 15:05:30.257 TRACE cinder return GreenSSLSocket(sock, *a, **kw) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 46, in __init__ 2013-06-27 15:05:30.257 TRACE cinder super(GreenSSLSocket, self).__init__(sock.fd, *args, **kw) 2013-06-27 15:05:30.257 TRACE cinder File "/usr/lib/python2.7/ssl.py", line 197, in __init__ 2013-06-27 15:05:30.257 TRACE cinder ciphers) 2013-06-27 15:05:30.257 TRACE cinder TypeError: an integer is required This is because the underlying rpc driver is trying to create an SSL socket which requires an integer such as the following built-in SSL integer constants: PROTOCOL_SSLv2 PROTOCOL_SSLv3 PROTOCOL_SSLv23 PROTOCOL_TLSv1 To manage notifications about this bug go to: https://bugs.launchpad.net/oslo/+bug/1195431/+subscriptions From gerrit2 at review.openstack.org Fri Jun 28 17:00:37 2013 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 28 Jun 2013 17:00:37 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I9b0dcb7d648ee6809185c71ba457c8a8a6c90d50 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/30973 Log: commit 95649f5ad28fa3fad6950964f93abc2795bbf7bf Author: Joel Coffman Date: Fri Jun 21 10:54:05 2013 -0400 Create key manager interface This interface provides a thin wrapper around an underlying key management implementation such as Barbican or a KMIP server. The key manager interface is used by the volume encryption code to retrieve keys for volumes. Implements: blueprint encrypt-cinder-volumes Change-Id: I9b0dcb7d648ee6809185c71ba457c8a8a6c90d50 SecurityImpact From Numero.8 at free.fr Sat Jun 29 17:53:42 2013 From: Numero.8 at free.fr (Numero 8) Date: Sat, 29 Jun 2013 17:53:42 -0000 Subject: [Openstack-security] [Bug 1004114] Re: Password logging References: <20120524190215.26515.18198.malonedeb@gac.canonical.com> Message-ID: <20130629175342.20516.22215.malone@chaenomeles.canonical.com> Waiting for a feedback following last code review. See https://review.openstack.org/#/c/33532/ -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1004114 Title: Password logging Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in Python client library for Keystone: In Progress Bug description: When the log level is set to DEBUG, keystoneclient's full-request logging mechanism kicks in, exposing plaintext passwords, etc. This bug is mostly out of the scope of Horizon; however, Horizon can also be more secure in this regard. We should make sure that wherever we *are* handling sensitive data we use Django's error report filtering mechanisms so they don't appear in tracebacks, etc. (https://docs.djangoproject.com/en/dev/howto/error-reporting/#filtering-error-reports) Keystone may also want to look at respecting such annotations in their logging mechanism, i.e. if Django were properly annotating these data objects, keystoneclient could check for those annotations and properly sanitize the log output. If not this exact mechanism, then something similar would be wise. For the time being, it's also worth documenting in both projects that a log level of DEBUG will log passwords in plain text. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions