From 1611171 at bugs.launchpad.net Wed Nov 1 12:17:53 2017
From: 1611171 at bugs.launchpad.net (OpenStack Infra)
Date: Wed, 01 Nov 2017 12:17:53 -0000
Subject: [Openstack-security] [Bug 1611171] Re: re-runs self via sudo
References: <20160809015520.22289.87995.malonedeb@soybean.canonical.com>
Message-ID: <150953867377.8592.8215625363549726725.malone@wampee.canonical.com>

Reviewed: https://review.openstack.org/513665
Committed: https://git.openstack.org/cgit/openstack/designate/commit/?id=440a67cab18e3ab725383d01b4ed26fa3b1d3da0
Submitter: Zuul
Branch: master

commit 440a67cab18e3ab725383d01b4ed26fa3b1d3da0
Author: Jens Harbott
Date: Fri Oct 20 08:34:18 2017 +0000

    Don't attempt to escalate designate-manage privileges

    Remove code which allowed designate-manage to attempt to escalate
    privileges so that configuration files can be read by users who
    normally wouldn't have access, but do have sudo access.

    Simpler version of [1].

    [1] I03063d2af14015e6506f1b6e958f5ff219aa4a87

    Closes-Bug: 1611171
    Change-Id: I013754da27e9dd13493bee1abfada3fbc2a004c0

** Changed in: designate
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1611171

Title:
  re-runs self via sudo

Status in Cinder: Fix Released
Status in Designate: Fix Released
Status in ec2-api: Fix Released
Status in gce-api: Fix Released
Status in Manila: In Progress
Status in masakari: Fix Released
Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) newton series: Fix Committed
Status in OpenStack Security Advisory: Won't Fix
Status in Rally: Fix Released

Bug description:
  Hello, I'm looking through Designate source code to determine if it is
  appropriate to include in Ubuntu Main. This isn't a full security
  audit.
  This looks like trouble:

  ./designate/cmd/manage.py

    def main():
        CONF.register_cli_opt(category_opt)
        try:
            utils.read_config('designate', sys.argv)
            logging.setup(CONF, 'designate')
        except cfg.ConfigFilesNotFoundError:
            cfgfile = CONF.config_file[-1] if CONF.config_file else None
            if cfgfile and not os.access(cfgfile, os.R_OK):
                st = os.stat(cfgfile)
                print(_("Could not read %s. Re-running with sudo") % cfgfile)
                try:
                    os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + sys.argv)
                except Exception:
                    print(_('sudo failed, continuing as if nothing happened'))

            print(_('Please re-run designate-manage as root.'))
            sys.exit(2)

  This is an interesting decision -- if the configuration file is _not_
  readable by the user in question, give the executing user complete
  privileges of the user that owns the unreadable file. I'm not a fan of
  hiding privilege escalation / modifications in programs -- if a user
  had recently used sudo and thus had the authentication token already
  stored for their terminal, this 'hidden' use of sudo may be unexpected
  and unwelcome, especially since it appears that argv from the first
  call leaks through to the sudo call.

  Is this intentional OpenStack style? Or unexpected for you guys too?

  (Feel free to make this public at your convenience.)
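The committed fix simply removes the re-exec instead of trying to sanitize it. A minimal sketch of the post-fix behavior (the helper name here is hypothetical, not the actual designate-manage code): fail loudly and let the operator decide, never escalate on their behalf.

```python
import os


def check_config_readable(cfgfile):
    """Return an exit status instead of silently re-running via sudo.

    Illustrative sketch only: if the config file is unreadable, tell
    the operator and signal failure; never call sudo for them.
    """
    if cfgfile and not os.access(cfgfile, os.R_OK):
        print("Could not read %s." % cfgfile)
        print("Please re-run designate-manage as a user that can read it.")
        return 2  # caller passes this to sys.exit()
    return 0
```

The caller would do `sys.exit(check_config_readable(cfgfile))`; the key point is that the decision to escalate stays with the human at the terminal.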
  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1611171/+subscriptions

From 1713783 at bugs.launchpad.net Thu Nov 9 13:42:54 2017
From: 1713783 at bugs.launchpad.net (OpenStack Infra)
Date: Thu, 09 Nov 2017 13:42:54 -0000
Subject: [Openstack-security] [Bug 1713783] Re: After failed evacuation the recovered source compute tries to delete the instance
References: <150402703560.19393.11649471063519714290.malonedeb@chaenomeles.canonical.com>
Message-ID: <151023497495.26940.8374059648035209095.malone@soybean.canonical.com>

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/518733

** Changed in: nova/ocata
   Status: Triaged => In Progress

** Changed in: nova/ocata
   Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

--
You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1713783

Title:
  After failed evacuation the recovered source compute tries to delete the instance

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) newton series: Triaged
Status in OpenStack Compute (nova) ocata series: In Progress
Status in OpenStack Compute (nova) pike series: Fix Committed
Status in OpenStack Security Advisory: Won't Fix

Bug description:
  Description
  ===========
  In case of a failed evacuation attempt the status of the migration is
  'accepted' instead of 'failed', so when the source compute is
  recovered the compute manager tries to delete the instance from the
  source host. However, a secondary fault prevents deleting the
  allocation in placement, so the actual deletion of the instance fails
  as well.

  Steps to reproduce
  ==================
  The following functional test reproduces the bug:
  https://review.openstack.org/#/c/498482/
  What it does: initiate evacuation when no valid host is available. The
  evacuation fails, but the compute manager still tries to delete the
  instance.
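The scenario hinges on the migration record's status. A toy sketch of why recording the failed attempt as 'failed' matters when the source compute restarts (names are illustrative, not nova's actual internals):

```python
def instances_safe_to_delete(migrations, host):
    """Decide which local instances were really evacuated away.

    A recovering source compute should only delete instances whose
    evacuation actually completed. The bug: failed attempts were left
    in 'accepted' status, and treating 'accepted' as evacuated made
    the host delete an instance that had never moved.
    """
    evacuated = []
    for m in migrations:
        if m['source_compute'] != host:
            continue
        # Only a completed evacuation means the instance lives elsewhere;
        # 'accepted' (and, post-fix, 'failed') attempts are skipped.
        if m['migration_type'] == 'evacuation' and m['status'] == 'done':
            evacuated.append(m['instance_uuid'])
    return evacuated
```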
  Logs:

    2017-08-29 19:11:15,751 ERROR [oslo_messaging.rpc.server] Exception during message handling
    NoValidHost: No valid host was found. There are not enough hosts available.
    2017-08-29 19:11:16,103 INFO [nova.tests.functional.test_servers] Running periodic for compute1 (host1)
    2017-08-29 19:11:16,115 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" status: 200 len: 18 microversion: 1.1
    2017-08-29 19:11:16,120 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" status: 200 len: 401 microversion: 1.0
    2017-08-29 19:11:16,131 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/allocations" status: 200 len: 152 microversion: 1.0
    2017-08-29 19:11:16,138 INFO [nova.compute.resource_tracker] Final resource view: name=host1 phys_ram=8192MB used_ram=1024MB phys_disk=1028GB used_disk=1GB total_vcpus=10 used_vcpus=1 pci_stats=[]
    2017-08-29 19:11:16,146 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" status: 200 len: 18 microversion: 1.1
    2017-08-29 19:11:16,151 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" status: 200 len: 401 microversion: 1.0
    2017-08-29 19:11:16,152 INFO [nova.tests.functional.test_servers] Running periodic for compute2 (host2)
    2017-08-29 19:11:16,163 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" status: 200 len: 18 microversion: 1.1
    2017-08-29 19:11:16,168 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" status: 200 len: 401 microversion: 1.0
    2017-08-29 19:11:16,176 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/allocations" status: 200 len: 54 microversion: 1.0
    2017-08-29 19:11:16,184 INFO [nova.compute.resource_tracker] Final resource view: name=host2 phys_ram=8192MB used_ram=512MB phys_disk=1028GB used_disk=0GB total_vcpus=10 used_vcpus=0 pci_stats=[]
    2017-08-29 19:11:16,192 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" status: 200 len: 18 microversion: 1.1
    2017-08-29 19:11:16,197 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" status: 200 len: 401 microversion: 1.0
    2017-08-29 19:11:16,198 INFO [nova.tests.functional.test_servers] Finished with periodics
    2017-08-29 19:11:16,255 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/6f70656e737461636b20342065766572/servers/5058200c-478e-4449-88c1-906fdd572662" status: 200 len: 1875 microversion: 2.53 time: 0.056198
    2017-08-29 19:11:16,262 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/6f70656e737461636b20342065766572/os-migrations" status: 200 len: 373 microversion: 2.53 time: 0.004618
    2017-08-29 19:11:16,280 INFO [nova.api.openstack.requestlog] 127.0.0.1 "PUT /v2.1/6f70656e737461636b20342065766572/os-services/c269bc74-4720-4de4-a6e5-889080b892a0" status: 200 len: 245 microversion: 2.53 time: 0.016442
    2017-08-29 19:11:16,281 INFO [nova.service] Starting compute node (version 16.0.0)
    2017-08-29 19:11:16,296 INFO [nova.compute.manager] Deleting instance as it has been evacuated from this host

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1713783/+subscriptions

From gerrit2 at
review.openstack.org Mon Nov 13 18:02:27 2017
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Mon, 13 Nov 2017 18:02:27 +0000
Subject: [Openstack-security] [openstack/castellan] SecurityImpact review request change openstack%2Fcastellan~master~Ia5316490201c33e23a4206838d5a4fb3dd00f527
Message-ID:

Hi, I'd like you to take a look at this patch for potential SecurityImpact.

https://review.openstack.org/514734

Log:
commit 9a6cde18edf4bf7d3017cc938bd6813d8eab1afd
Author: Alan Bishop
Date: Tue Oct 24 14:53:36 2017 +0000

    WIP: Support handling legacy all-zeros key ID

    This WIP is meant to solicit community feedback. If the approach is
    deemed acceptable, I'll update the patch with unit tests and a new
    commit message.

    This patch addresses a specific use case, where a user has encrypted
    volumes based on the fixed_key used by Cinder's and Nova's
    ConfKeyManager. The user wishes to switch to Barbican, but existing
    volumes must continue to function during the migration period.

    The code conditionally adds a shim around the backend KeyManager when
    both of these conditions are met:

    1) The configuration contains a fixed_key value. This essentially
       signals the ConfKeyManager has been in use at one time
    2) The current backend is *not* the ConfKeyManager

    When the shim is active, a MigrationKeyManager class is dynamically
    created that extends the backend's KeyManager class. The
    MigrationKeyManager exists solely to override two functions:

    o The KeyManager.get() function detects requests for the secret
      associated with the fixed_key, which is identified by an all-zeros
      key ID.
      - Requests for the all-zeros key ID are handled by mimicking the
        ConfKeyManager's response, which is a secret derived from the
        fixed_key.
      - Requests for any other key ID are passed on to the real backend.
    o The KeyManager.delete() function is similar:
      - Requests to delete the all-zeros key ID are essentially ignored,
        just as is done by the ConfKeyManager.
- Requests to delete any other key ID are passed on to the real backend. All other KeyManager functions are not overridden, and will therefore be handled directly by the real backend. SecurityImpact Change-Id: Ia5316490201c33e23a4206838d5a4fb3dd00f527 From gerrit2 at review.openstack.org Tue Nov 14 15:13:26 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 14 Nov 2017 15:13:26 +0000 Subject: [Openstack-security] [openstack/castellan] SecurityImpact review request change openstack%2Fcastellan~master~Ia5316490201c33e23a4206838d5a4fb3dd00f527 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/514734 Log: commit 636ee2ef56ade23af041ecfbeb8f581e90f10182 Author: Alan Bishop Date: Tue Oct 24 14:53:36 2017 +0000 Support handling legacy all-zeros key ID This patch addresses a specific use case, where a user has encrypted volumes based on the fixed_key used by Cinder's and Nova's ConfKeyManager. The user wishes to switch to Barbican, but existing volumes must continue to function during the migration period. The code conditionally adds a shim around the backend KeyManager when both of these conditions are met: 1) The configuration contains a fixed_key value. This essentially signals the ConfKeyManager has been in use at one time 2) The current backend is *not* the ConfKeyManager When the shim is active, a MigrationKeyManager class is dynamically created that extends the backend's KeyManager class. The MigrationKeyManager exists solely to override two functions: o The KeyManager.get() function detects requests for the secret associated with the fixed_key, which is identified by an all-zeros key ID. - Requests for the all-zeros key ID are handled by mimicing the ConfKeyManager's response, which is a secret derived from the fixed_key. - Requests for any other key ID are passed on to the real backend. 
o The KeyManager.delete() function is similar: - Requests to delete the all-zeros key ID are essentially ignored, just as is done by the ConfKeyManager. - Requests to delete any other key ID are passed on to the real backend. All other KeyManager functions are not overridden, and will therefore be handled directly by the real backend. SecurityImpact Change-Id: Ia5316490201c33e23a4206838d5a4fb3dd00f527 From gerrit2 at review.openstack.org Tue Nov 14 18:04:54 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 14 Nov 2017 18:04:54 +0000 Subject: [Openstack-security] [openstack/castellan] SecurityImpact review request change openstack%2Fcastellan~master~Ia5316490201c33e23a4206838d5a4fb3dd00f527 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/514734 Log: commit d75726f09ac04f8252a6187ba861618e9a9cbc5f Author: Alan Bishop Date: Tue Oct 24 14:53:36 2017 +0000 Support handling legacy all-zeros key ID This patch addresses a specific use case, where a user has encrypted volumes based on the fixed_key used by Cinder's and Nova's ConfKeyManager. The user wishes to switch to Barbican, but existing volumes must continue to function during the migration period. The code conditionally adds a shim around the backend KeyManager when both of these conditions are met: 1) The configuration contains a fixed_key value. This essentially signals the ConfKeyManager has been in use at one time 2) The current backend is *not* the ConfKeyManager When the shim is active, a MigrationKeyManager class is dynamically created that extends the backend's KeyManager class. The MigrationKeyManager exists solely to override two functions: o The KeyManager.get() function detects requests for the secret associated with the fixed_key, which is identified by an all-zeros key ID. 
- Requests for the all-zeros key ID are handled by mimicing the ConfKeyManager's response, which is a secret derived from the fixed_key. - Requests for any other key ID are passed on to the real backend. o The KeyManager.delete() function is similar: - Requests to delete the all-zeros key ID are essentially ignored, just as is done by the ConfKeyManager. - Requests to delete any other key ID are passed on to the real backend. All other KeyManager functions are not overridden, and will therefore be handled directly by the real backend. SecurityImpact Change-Id: Ia5316490201c33e23a4206838d5a4fb3dd00f527 From gerrit2 at review.openstack.org Tue Nov 14 19:15:04 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 14 Nov 2017 19:15:04 +0000 Subject: [Openstack-security] [openstack/castellan] SecurityImpact review request change openstack%2Fcastellan~master~Ia5316490201c33e23a4206838d5a4fb3dd00f527 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/514734 Log: commit 5ba0837b3b3cb20ee0e31604ee97e3c4a284990f Author: Alan Bishop Date: Tue Oct 24 14:53:36 2017 +0000 Support handling legacy all-zeros key ID This patch addresses a specific use case, where a user has encrypted volumes based on the fixed_key used by Cinder's and Nova's ConfKeyManager. The user wishes to switch to Barbican, but existing volumes must continue to function during the migration period. The code conditionally adds a shim around the backend KeyManager when both of these conditions are met: 1) The configuration contains a fixed_key value. This essentially signals the ConfKeyManager has been in use at one time 2) The current backend is *not* the ConfKeyManager When the shim is active, a MigrationKeyManager class is dynamically created that extends the backend's KeyManager class. 
The MigrationKeyManager exists solely to override two functions: o The KeyManager.get() function detects requests for the secret associated with the fixed_key, which is identified by an all-zeros key ID. - Requests for the all-zeros key ID are handled by mimicing the ConfKeyManager's response, which is a secret derived from the fixed_key. - Requests for any other key ID are passed on to the real backend. o The KeyManager.delete() function is similar: - Requests to delete the all-zeros key ID are essentially ignored, just as is done by the ConfKeyManager. - Requests to delete any other key ID are passed on to the real backend. All other KeyManager functions are not overridden, and will therefore be handled directly by the real backend. SecurityImpact Change-Id: Ia5316490201c33e23a4206838d5a4fb3dd00f527 From gerrit2 at review.openstack.org Tue Nov 14 21:04:11 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 14 Nov 2017 21:04:11 +0000 Subject: [Openstack-security] [openstack/castellan] SecurityImpact review request change openstack%2Fcastellan~master~Ia5316490201c33e23a4206838d5a4fb3dd00f527 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/514734 Log: commit b37c5ad35ee09f79fbb8f4ba180ed9aebc031107 Author: Alan Bishop Date: Tue Oct 24 14:53:36 2017 +0000 Support handling legacy all-zeros key ID This patch addresses a specific use case, where a user has encrypted volumes based on the fixed_key used by Cinder's and Nova's ConfKeyManager. The user wishes to switch to Barbican, but existing volumes must continue to function during the migration period. The code conditionally adds a shim around the backend KeyManager when both of these conditions are met: 1) The configuration contains a fixed_key value. 
This essentially signals the ConfKeyManager has been in use at one time 2) The current backend is *not* the ConfKeyManager When the shim is active, a MigrationKeyManager class is dynamically created that extends the backend's KeyManager class. The MigrationKeyManager exists solely to override two functions: o The KeyManager.get() function detects requests for the secret associated with the fixed_key, which is identified by an all-zeros key ID. - Requests for the all-zeros key ID are handled by mimicing the ConfKeyManager's response, which is a secret derived from the fixed_key. - Requests for any other key ID are passed on to the real backend. o The KeyManager.delete() function is similar: - Requests to delete the all-zeros key ID are essentially ignored, just as is done by the ConfKeyManager. - Requests to delete any other key ID are passed on to the real backend. All other KeyManager functions are not overridden, and will therefore be handled directly by the real backend. SecurityImpact Change-Id: Ia5316490201c33e23a4206838d5a4fb3dd00f527 From 1708122 at bugs.launchpad.net Wed Nov 15 00:50:13 2017 From: 1708122 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 15 Nov 2017 00:50:13 -0000 Subject: [Openstack-security] [Bug 1708122] Re: Don't return back the sensitive information to user References: <150166534782.4172.13585998432019727052.malonedeb@gac.canonical.com> Message-ID: <151070701342.8069.11254376698817747769.malone@soybean.canonical.com> Reviewed: https://review.openstack.org/490320 Committed: https://git.openstack.org/cgit/openstack/heat/commit/?id=8cdfc3b293027292d21974b8152f42426d1f61ae Submitter: Zuul Branch: master commit 8cdfc3b293027292d21974b8152f42426d1f61ae Author: huangtianhua Date: Thu Aug 3 11:56:11 2017 +0800 Don't return the sensitive information to user We return back the sensitive information to user when some exceptions happened, for example, when DBError happened, we return the whole sql statement to user, it's not safe. 
    This patch changes to return the message only if the exception is a
    HeatException; otherwise the message won't be revealed to the user.

    Change-Id: I6e01b1003a39106274e79c3b413917a30b5651b6
    Closes-Bug: #1708122

** Changed in: heat
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1708122

Title:
  Don't return back the sensitive information to user

Status in OpenStack Heat: Fix Released
Status in OpenStack Security Advisory: Won't Fix

Bug description:
  We return sensitive information to the user when some exceptions
  happen; for example, when a DBError happens, we return the whole SQL
  statement to the user, which is not safe. We also return the
  traceback to the user, which is not necessary. Maybe we can do the
  same thing as nova and cinder and add a 'safe' attribute to some
  exceptions to decide whether details like the error message should be
  returned to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1708122/+subscriptions

From fungi at yuggoth.org Wed Nov 15 20:35:57 2017
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 15 Nov 2017 20:35:57 -0000
Subject: [Openstack-security] [Bug 1732294] Re: Probable DOS in linuxbridge
References: <151070054181.1317.15566312997210327161.malonedeb@chaenomeles.canonical.com>
Message-ID: <151077815746.28545.4639191625267520794.malone@gac.canonical.com>

Thanks for the heads up! It's our policy to go ahead and end embargoes once an issue is publicly disclosed, so we'll move forward triaging this as class C2, "A vulnerability, but not in OpenStack supported code, e.g., in a dependency", per our report taxonomy: https://security.openstack.org/vmt-process.html#incident-report-taxonomy

Adding a new OSSN task in case the security note editors want to publish something about this prior to or once the kernel fix is available.
** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
-
- --
-
  We experienced a DOS yesterday on a system (not openstack based) which
  would have been mitigated if a mac address whitelist in ebtables had
  occurred in the nat PREROUTING chain rather than the filter FORWARD
  chain.

  At least with kernel version 4.9, with rapidly cycling mac addresses
  the linux bridge appears to get bogged down in learning new MAC
  addresses if this is not explicitly turned off with brctl setageing 0.

  We deployed a workaround to our own infrastructure but I believe
  https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py#n158
  means that openstack has the same vulnerability. It should be possible
  to move all logic related to checking the input to the ebtables nat
  PREROUTING chain using the ebtables_nat module.

  To duplicate, in a VM on a host with bridged networking and mac
  spoofing protection in place, install dsniff and run:

  macof -i -s -d -n 50000000 &> /dev/null

  Observe on the host that ksoftirqd usage goes to near 100% on one
  core, that 'perf top' will show br_fdb_update as taking significant
  resources, and that 'brctl showmacs ' will probably hang.
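Neutron's linuxbridge agent drives ebtables from Python, so the mitigation suggested above amounts to emitting the MAC whitelist into a different table/chain. A hedged sketch of the idea (the command layout and function name are illustrative, not neutron's actual arp_protect code):

```python
def mac_whitelist_rules(vif, allowed_macs, table='nat', chain='PREROUTING'):
    """Build ebtables commands that accept only whitelisted source MACs.

    Installing the check in the nat PREROUTING chain, instead of the
    filter FORWARD chain, rejects spoofed frames earlier, before the
    bridge burns CPU learning millions of bogus FDB entries.
    """
    cmds = [['ebtables', '-t', table, '-A', chain,
             '-i', vif, '-s', mac, '-j', 'ACCEPT']
            for mac in allowed_macs]
    # Anything else arriving on this VIF is dropped up front.
    cmds.append(['ebtables', '-t', table, '-A', chain,
                 '-i', vif, '-j', 'DROP'])
    return cmds
```

Each returned list would be handed to the agent's privileged command runner; the point is only that the same match logic can target `-t nat` / `PREROUTING` instead of `-t filter` / `FORWARD`.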
** Information type changed from Private Security to Public ** Tags added: security ** Also affects: ossn Importance: Undecided Status: New ** Changed in: ossa Status: New => Won't Fix -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1732294 Title: Probable DOS in linuxbridge Status in neutron: New Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: New Bug description: We experienced a DOS yesterday on a system (not openstack based) which would have been mitigated if a mac address whitelist in ebtables had occurred in the nat PREROUTING chain rather than the filter FORWARD chain. At least with kernel version 4.9, with rapidly cycling mac addresses the linux bridge appears to get bogged down in learning new MAC addresses if this is not explicitly turned off with brctl setageing 0. We deployed a workaround to our own infrastructure but I believe https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py#n158 means that openstack has the same vulnerability. It should be possible to move all logic related to checking the input to the ebtables nat PREROUTING chain using the ebtables_nat module. To duplicate, in a VM on a host with bridged networking and mac spoofing protection in place, install dsniff and run: macof -i -s -d -n 50000000 &> /dev/null Observe on the host that ksoftirqd usage goes to near 100% on one core, that 'perf top' will show br_fdb_update as taking significant resources, and that 'brctl showmacs ' will probably hang. 
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732294/+subscriptions

From 1732294 at bugs.launchpad.net Thu Nov 16 00:28:12 2017
From: 1732294 at bugs.launchpad.net (Brian Haley)
Date: Thu, 16 Nov 2017 00:28:12 -0000
Subject: [Openstack-security] [Bug 1732294] Re: Probable DOS in linuxbridge
References: <151070054181.1317.15566312997210327161.malonedeb@chaenomeles.canonical.com>
Message-ID: <151079209485.1647.1415177839326657152.launchpad@chaenomeles.canonical.com>

** Changed in: neutron
   Importance: Undecided => Critical

--
You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1732294

Title:
  Probable DOS in linuxbridge

Status in neutron: In Progress
Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Security Notes: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732294/+subscriptions

From 1732294 at bugs.launchpad.net Thu Nov 16 00:28:32 2017
From: 1732294 at bugs.launchpad.net (OpenStack Infra)
Date: Thu, 16 Nov 2017 00:28:32 -0000
Subject: [Openstack-security] [Bug 1732294] Re: Probable DOS in linuxbridge
References: <151070054181.1317.15566312997210327161.malonedeb@chaenomeles.canonical.com>
Message-ID: <151079211306.8034.16789753615728661531.malone@soybean.canonical.com>

Fix proposed to branch: master
Review: https://review.openstack.org/520249

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Assignee: (unassigned) => Brian Haley (brian-haley)

--
You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1732294

Title:
  Probable DOS in linuxbridge

Status in neutron: In Progress
Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Security Notes: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732294/+subscriptions

From 1732294 at bugs.launchpad.net Thu Nov 16 15:35:52 2017
From: 1732294 at bugs.launchpad.net (Brian Haley)
Date: Thu, 16 Nov 2017 15:35:52 -0000
Subject: [Openstack-security] [Bug 1732294] Re: Probable DOS in linuxbridge
References: <151070054181.1317.15566312997210327161.malonedeb@chaenomeles.canonical.com>
Message-ID: <151084655271.28545.5343041454630953525.malone@gac.canonical.com>

Sarah - is it possible for you to test my proposed change? It's basically doing what you said - move the rules to the nat table PREROUTING chain. Thanks.

--
You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1732294

Title:
  Probable DOS in linuxbridge

Status in neutron: In Progress
Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Security Notes: New

Bug description:
  We experienced a DOS yesterday on a system (not openstack based) which
  would have been mitigated if a mac address whitelist in ebtables had
  occurred in the nat PREROUTING chain rather than the filter FORWARD
  chain.
At least with kernel version 4.9, with rapidly cycling mac addresses the linux bridge appears to get bogged down in learning new MAC addresses if this is not explicitly turned off with brctl setageing 0. We deployed a workaround to our own infrastructure but I believe https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py#n158 means that openstack has the same vulnerability. It should be possible to move all logic related to checking the input to the ebtables nat PREROUTING chain using the ebtables_nat module. To duplicate, in a VM on a host with bridged networking and mac spoofing protection in place, install dsniff and run: macof -i -s -d -n 50000000 &> /dev/null Observe on the host that ksoftirqd usage goes to near 100% on one core, that 'perf top' will show br_fdb_update as taking significant resources, and that 'brctl showmacs ' will probably hang. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1732294/+subscriptions From gerrit2 at review.openstack.org Tue Nov 21 11:00:08 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 21 Nov 2017 11:00:08 +0000 Subject: [Openstack-security] [openstack/swauth] SecurityImpact review request change openstack%2Fswauth~master~I0d01e8e95400c82ef25f98e2d269532e83233c2c Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/521808 Log: commit f38fc679eef7a46c53bcf907975dd1e4511b94eb Author: Pavel Kvasnicka Date: Tue Nov 21 09:38:09 2017 +0100 Hash token before storing it in Swift Swauth uses token value as object name. Object names are logged in proxy and object servers. Anybody with access to proxy/object server logs can see token values. Attacker can use this token to access user's data in Swift store. Instead of token, hashed token (with HASH_PATH_PREFIX and HASH_PATH_SUFFIX) is used as object name now. 
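The essence of the swauth fix above is that the stored object name becomes a one-way hash of the token, so proxy/object logs never expose the bearer value. A minimal sketch (the digest choice and prefix/suffix handling here are illustrative; swauth reuses Swift's HASH_PATH_PREFIX and HASH_PATH_SUFFIX values):

```python
import hashlib


def token_to_object_name(token, hash_path_prefix=b'', hash_path_suffix=b''):
    """Derive a loggable object name from a secret token.

    The raw token never appears in the name; anyone reading the logs
    sees only the salted hash, which cannot be replayed as a token.
    """
    payload = hash_path_prefix + token.encode('utf-8') + hash_path_suffix
    return hashlib.sha512(payload).hexdigest()
```

Note the warning in the commit message follows directly from this scheme: previously stored objects were named by the raw token, so existing tokens no longer resolve after the upgrade unless they are still cached in memcached.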
WARNING: In deployments without memcached this patch logs out all users because tokens become invalid. CVE-2017-16613 SecurityImpact Closes-Bug: #1655781 Change-Id: I0d01e8e95400c82ef25f98e2d269532e83233c2c From gerrit2 at review.openstack.org Tue Nov 21 11:03:11 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 21 Nov 2017 11:03:11 +0000 Subject: [Openstack-security] [openstack/swauth] SecurityImpact review request change openstack%2Fswauth~master~I0d01e8e95400c82ef25f98e2d269532e83233c2c Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/521808 Log: commit 70af7986265a3defea054c46efc82d0698917298 Author: Pavel Kvasnicka Date: Tue Nov 21 09:38:09 2017 +0100 Hash token before storing it in Swift Swauth uses token value as object name. Object names are logged in proxy and object servers. Anybody with access to proxy/object server logs can see token values. An attacker can use this token to access the user's data in the Swift store. Instead of token, hashed token (with HASH_PATH_PREFIX and HASH_PATH_SUFFIX) is used as object name now. WARNING: In deployments without memcached this patch logs out all users because tokens become invalid. CVE-2017-16613 SecurityImpact Closes-Bug: #1655781 Change-Id: I0d01e8e95400c82ef25f98e2d269532e83233c2c From gerrit2 at review.openstack.org Tue Nov 21 14:25:05 2017 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 21 Nov 2017 14:25:05 +0000 Subject: [Openstack-security] [openstack/castellan] SecurityImpact review request change openstack%2Fcastellan~master~Ia5316490201c33e23a4206838d5a4fb3dd00f527 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact.
https://review.openstack.org/514734 Log: commit fc0fc79eb63130d9f0f4bcc66f7a6f17c41d1144 Author: Alan Bishop Date: Tue Oct 24 14:53:36 2017 +0000 Support handling legacy all-zeros key ID This patch addresses a specific use case, where a user has encrypted volumes based on the fixed_key used by Cinder's and Nova's ConfKeyManager. The user wishes to switch to Barbican, but existing volumes must continue to function during the migration period. The code conditionally adds a shim around the backend KeyManager when both of these conditions are met: 1) The configuration contains a fixed_key value. This essentially signals the ConfKeyManager has been in use at one time 2) The current backend is *not* the ConfKeyManager When the shim is active, a MigrationKeyManager class is dynamically created that extends the backend's KeyManager class. The MigrationKeyManager exists solely to override two functions: o The KeyManager.get() function detects requests for the secret associated with the fixed_key, which is identified by an all-zeros key ID. - Requests for the all-zeros key ID are handled by mimicking the ConfKeyManager's response, which is a secret derived from the fixed_key. - Requests for any other key ID are passed on to the real backend. o The KeyManager.delete() function is similar: - Requests to delete the all-zeros key ID are essentially ignored, just as is done by the ConfKeyManager. - Requests to delete any other key ID are passed on to the real backend. All other KeyManager functions are not overridden, and will therefore be handled directly by the real backend.
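The shim described above can be sketched as follows. Class and method names follow the commit message, but this is a toy illustration rather than castellan's real API: the real KeyManager methods also take a request context, the all-zeros ID shown is an assumption of its exact form, and BackendKeyManager stands in for a real backend such as Barbican.

```python
# Legacy key ID used by the ConfKeyManager (illustrative constant).
FIXED_KEY_ID = "00000000-0000-0000-0000-000000000000"

class BackendKeyManager:
    """Stand-in for the real backend key manager."""
    def __init__(self):
        self._store = {}
    def store(self, key_id, secret):
        self._store[key_id] = secret
    def get(self, key_id):
        return self._store[key_id]
    def delete(self, key_id):
        del self._store[key_id]

def wrap_with_migration(backend_cls, fixed_key):
    """Dynamically derive a shim class, as the commit describes."""
    class MigrationKeyManager(backend_cls):
        def get(self, key_id):
            if key_id == FIXED_KEY_ID:
                # Mimic the ConfKeyManager: derive the secret from fixed_key.
                return bytes.fromhex(fixed_key)
            return super().get(key_id)  # any other ID goes to the backend
        def delete(self, key_id):
            if key_id == FIXED_KEY_ID:
                return  # ignored, just as the ConfKeyManager would do
            return super().delete(key_id)
    return MigrationKeyManager
```

All other operations fall through to the backend class untouched, which matches the commit's note that only get() and delete() are overridden.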
SecurityImpact Change-Id: Ia5316490201c33e23a4206838d5a4fb3dd00f527 From fungi at yuggoth.org Tue Nov 21 19:19:17 2017 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 21 Nov 2017 19:19:17 -0000 Subject: [Openstack-security] [Bug 1733289] Re: Image data stays in store (filesystem store) if image is deleted after staging call References: <151116941982.13328.2928212645411703751.malonedeb@soybean.canonical.com> Message-ID: <151129195721.15063.4603602397449701308.malone@soybean.canonical.com> Thanks. In that case, treating as a normal Public bug tagged as a potential security hardening opportunity. ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1733289 Title: Image data stays in store (filesystem store) if image is deleted after staging call Status in Glance: New Status in OpenStack Security Advisory: Won't Fix Bug description: When an image is deleted after the staging call, the image record gets deleted from the database, but the image data remains in the backend ('/tmp/staging' directory). NOTE: This issue will occur only if image-import is enabled in the deployment i.e. 'enable_image_import' is set to True in glance-api.conf Steps to reproduce: 1. Create image    $ glance image-create --container-format ami --disk-format ami --name test_image 2. Add image to staging area using stage call    $ glance image-stage <image-id> 3. Verify that image is uploaded to staging area i.e. in '/tmp/staging' area    $ ls -la /tmp/staging/    Output: -rw-r--r--. 1 centos centos 313 Nov 20 09:05 /tmp/staging/<image-id> 4. Delete the image    $ glance image-delete <image-id> 5. Verify image-list does not show deleted image    $ glance image-list 6. Verify that image is still present in staging area i.e. in '/tmp/staging' area    $ ls -la /tmp/staging/    Output: -rw-r--r--. 1 centos centos 313 Nov 20 09:05 /tmp/staging/<image-id> The image gets deleted from the database but the image data remains in the staging area i.e. in the '/tmp/staging' directory. After the image is deleted, the staged data should be cleared from the staging area as well. The attack scenario here is to create/stage/delete a lot of large images, causing a DoS on the temporary image backend by filling it up. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1733289/+subscriptions From 1733289 at bugs.launchpad.net Mon Nov 27 06:37:34 2017 From: 1733289 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 27 Nov 2017 06:37:34 -0000 Subject: [Openstack-security] [Bug 1733289] Re: Image data stays in store (filesystem store) if image is deleted after staging call References: <151116941982.13328.2928212645411703751.malonedeb@soybean.canonical.com> Message-ID: <151176465407.14475.1802271221903152375.malone@wampee.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/523029 ** Changed in: glance Status: New => In Progress -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1733289 Title: Image data stays in store (filesystem store) if image is deleted after staging call Status in Glance: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: When an image is deleted after the staging call, the image record gets deleted from the database, but the image data remains in the backend ('/tmp/staging' directory). NOTE: This issue will occur only if image-import is enabled in the deployment i.e. 'enable_image_import' is set to True in glance-api.conf Steps to reproduce: 1. Create image    $ glance image-create --container-format ami --disk-format ami --name test_image 2. Add image to staging area using stage call    $ glance image-stage <image-id> 3. Verify that image is uploaded to staging area i.e.
in '/tmp/staging' area    $ ls -la /tmp/staging/    Output: -rw-r--r--. 1 centos centos 313 Nov 20 09:05 /tmp/staging/<image-id> 4. Delete the image    $ glance image-delete <image-id> 5. Verify image-list does not show deleted image    $ glance image-list 6. Verify that image is still present in staging area i.e. in '/tmp/staging' area    $ ls -la /tmp/staging/    Output: -rw-r--r--. 1 centos centos 313 Nov 20 09:05 /tmp/staging/<image-id> The image gets deleted from the database but the image data remains in the staging area i.e. in the '/tmp/staging' directory. After the image is deleted, the staged data should be cleared from the staging area as well. The attack scenario here is to create/stage/delete a lot of large images, causing a DoS on the temporary image backend by filling it up. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1733289/+subscriptions From fungi at yuggoth.org Tue Nov 28 17:51:09 2017 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 28 Nov 2017 17:51:09 -0000 Subject: [Openstack-security] [Bug 1711117] Re: paste_deploy flavor in sample configuration file shows misleading default References: <150288400998.8690.1208903786666789257.malonedeb@soybean.canonical.com> Message-ID: <151189146913.20166.17692522498475179495.malone@chaenomeles.canonical.com> Apologies, we seem to have overlooked opening this. ** Information type changed from Private Security to Public ** Changed in: ossa Status: Incomplete => Won't Fix ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and
Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - - -- - The "flavor" option of the "[paste_deploy]" section defaults to "None", but the sample configuration and documentation [1] suggest that it is "keystone". This can lead to insecure deployments without authentication. The "glance-api.conf" file shows the following:     #     # Deployment flavor to use in the server application pipeline.     #     # Provide a string value representing the appropriate deployment     # flavor used in the server application pipeline. This is typically     # the partial name of a pipeline in the paste configuration file with     # the service name removed.     #     # For example, if your paste section name in the paste configuration     # file is [pipeline:glance-api-keystone], set ``flavor`` to     # ``keystone``.     #     # Possible values:     # * String value representing a partial pipeline name.     #     # Related Options:     # * config_file     #     # (string value)     #flavor = keystone This is misleading and can lead operators to think that the default flavor being used is "keystone", but this is not the case:     DEBUG glance.common.config [-] paste_deploy.flavor = None log_opt_values /usr/lib/python2.7/dist-packages/oslo_config/cfg.py:2626 Previously, in Mitaka, the flavor was defined something like this:     # Partial name of a pipeline in your paste configuration file with the     # service name removed. For example, if your paste section name is     # [pipeline:glance-api-keystone] use the value "keystone" (string     # value)     #flavor = Therefore, somebody upgrading from a previous version would think that the default is now set to "keystone" instead of "None".
In such cases the operator could remove the "flavor=keystone" definition, assuming that the default value is correct. Moreover, the configuration reference states that the default is "keystone" [1], but this is not the case as the option does not set a default value, but a sample default [2] [1] https://docs.openstack.org/glance/latest/configuration/glance_api.html#paste_deploy [2] https://github.com/openstack/glance/blob/c4b0fbe632f759b00a1c326c17a05f134e93553d/glance/common/config.py#L33 Note that if the flavor for paste is not set, this will lead to a deployment without authentication. If the sample default is different from the actual default, this should be stated clearly in the comment for that option. ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1711117 Title: paste_deploy flavor in sample configuration file shows misleading default Status in Glance: New Status in OpenStack Security Advisory: Won't Fix Bug description: The "flavor" option of the "[paste_deploy]" section defaults to "None", but the sample configuration and documentation [1] suggest that it is "keystone". This can lead to insecure deployments without authentication. The "glance-api.conf" file shows the following:     #     # Deployment flavor to use in the server application pipeline.     #     # Provide a string value representing the appropriate deployment     # flavor used in the server application pipeline. This is typically     # the partial name of a pipeline in the paste configuration file with     # the service name removed.     #     # For example, if your paste section name in the paste configuration     # file is [pipeline:glance-api-keystone], set ``flavor`` to     # ``keystone``.     #     # Possible values:     # * String value representing a partial pipeline name.     
#     # Related Options:     # * config_file     #     # (string value)     #flavor = keystone This is misleading and can lead operators to think that the default flavor being used is "keystone", but this is not the case:     DEBUG glance.common.config [-] paste_deploy.flavor = None log_opt_values /usr/lib/python2.7/dist-packages/oslo_config/cfg.py:2626 Previously, in Mitaka, the flavor was defined something like this:     # Partial name of a pipeline in your paste configuration file with the     # service name removed. For example, if your paste section name is     # [pipeline:glance-api-keystone] use the value "keystone" (string     # value)     #flavor = Therefore, somebody upgrading from a previous version would think that the default is now set to "keystone" instead of "None". In such cases the operator could remove the "flavor=keystone" definition, assuming that the default value is correct. Moreover, the configuration reference states that the default is "keystone" [1], but this is not the case as the option does not set a default value, but a sample default [2] [1] https://docs.openstack.org/glance/latest/configuration/glance_api.html#paste_deploy [2] https://github.com/openstack/glance/blob/c4b0fbe632f759b00a1c326c17a05f134e93553d/glance/common/config.py#L33 Note that if the flavor for paste is not set, this will lead to a deployment without authentication. If the sample default is different from the actual default, this should be stated clearly in the comment for that option. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1711117/+subscriptions
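The distinction at the heart of this report, an option's runtime default versus the "sample default" shown in a generated configuration file, can be illustrated with a minimal stand-in. This toy StrOpt is not oslo.config's real class; it only mimics the one behavior at issue.

```python
class StrOpt:
    """Toy option mimicking oslo.config's default vs. sample_default."""

    def __init__(self, name, default=None, sample_default=None):
        self.name = name
        self.default = default              # what the service actually uses
        self.sample_default = sample_default  # what the sample file displays

    def sample_line(self):
        # The generated sample file prefers sample_default when set,
        # masking the real runtime default.
        shown = (self.sample_default
                 if self.sample_default is not None else self.default)
        return "#%s = %s" % (self.name, shown)

flavor = StrOpt("flavor", default=None, sample_default="keystone")
# The sample file shows "#flavor = keystone", but an operator who leaves
# the line commented out actually runs with flavor = None -- i.e. no
# keystone auth pipeline, which is exactly the trap the bug describes.
```

This is why the reporter asks that sample defaults differing from actual defaults be called out explicitly in the option's help text.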