From 1713783 at bugs.launchpad.net Fri Dec 7 06:39:40 2018 From: 1713783 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 07 Dec 2018 06:39:40 -0000 Subject: [Openstack-security] [Bug 1713783] Related fix merged to nova (stable/pike) References: <150402703560.19393.11649471063519714290.malonedeb@chaenomeles.canonical.com> Message-ID: <154416478047.16778.9129026645665024830.malone@soybean.canonical.com> Reviewed: https://review.openstack.org/621203 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c5cd6c8e50201dfaa192cdfbbc64d714eeadcd95 Submitter: Zuul Branch: stable/pike commit c5cd6c8e50201dfaa192cdfbbc64d714eeadcd95 Author: Matt Riedemann Date: Fri Sep 1 13:56:38 2017 -0400 Update docs for _destroy_evacuated_instances The docstring on this method no longer reflects reality, and hasn't since we started filtering on migration records starting back in Liberty. This change updates the docstring for what the method actually does now, and also includes a comment about why we include 'accepted' but not yet complete migrations when looking for things to remove from this source node. Change-Id: Ia0c67aeffe41e133de9dee5080be9c5ec1739aa6 Related-Bug: #1713783 (cherry picked from commit aec81e7d98c1ca082abd157dab5cc83b5bedfd89) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1713783 Title: After failed evacuation the recovered source compute tries to delete the instance Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) ocata series: Fix Committed Status in OpenStack Compute (nova) pike series: Fix Committed Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== In case of a failed evacuation attempt the status of the migration is 'accepted' instead of 'failed' so when source compute is recovered the compute manager tries to delete the instance from the source host. However a secondary fault prevents deleting the allocation in placement so the actual deletion of the instance fails as well. Steps to reproduce ================== The following functional test reproduces the bug: https://review.openstack.org/#/c/498482/ What it does: initiate evacuation when no valid host is available and evacuation fails, but nova manager still tries to delete the instance. Logs:     2017-08-29 19:11:15,751 ERROR [oslo_messaging.rpc.server] Exception during message handling     NoValidHost: No valid host was found. There are not enough hosts available.     
2017-08-29 19:11:16,103 INFO [nova.tests.functional.test_servers] Running periodic for compute1 (host1)     2017-08-29 19:11:16,115 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" status: 200 len: 18 microversion: 1.1     2017-08-29 19:11:16,120 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" status: 200 len: 401 microversion: 1.0     2017-08-29 19:11:16,131 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/allocations" status: 200 len: 152 microversion: 1.0     2017-08-29 19:11:16,138 INFO [nova.compute.resource_tracker] Final resource view: name=host1 phys_ram=8192MB used_ram=1024MB phys_disk=1028GB used_disk=1GB total_vcpus=10 used_vcpus=1 pci_stats=[]     2017-08-29 19:11:16,146 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" status: 200 len: 18 microversion: 1.1     2017-08-29 19:11:16,151 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" status: 200 len: 401 microversion: 1.0     2017-08-29 19:11:16,152 INFO [nova.tests.functional.test_servers] Running periodic for compute2 (host2)     2017-08-29 19:11:16,163 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" status: 200 len: 18 microversion: 1.1     2017-08-29 19:11:16,168 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" status: 200 len: 401 microversion: 1.0     2017-08-29 19:11:16,176 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/allocations" status: 200 len: 54 microversion: 1.0     2017-08-29 19:11:16,184 INFO [nova.compute.resource_tracker] Final resource view: name=host2 phys_ram=8192MB used_ram=512MB phys_disk=1028GB used_disk=0GB total_vcpus=10 used_vcpus=0 pci_stats=[]     2017-08-29 19:11:16,192 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" status: 200 len: 18 microversion: 1.1     2017-08-29 19:11:16,197 INFO [nova.api.openstack.placement.requestlog] 127.0.0.1 "GET /placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" status: 200 len: 401 microversion: 1.0     2017-08-29 19:11:16,198 INFO [nova.tests.functional.test_servers] Finished with periodics     2017-08-29 19:11:16,255 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/6f70656e737461636b20342065766572/servers/5058200c-478e-4449-88c1-906fdd572662" status: 200 len: 1875 microversion: 2.53 time: 0.056198     2017-08-29 19:11:16,262 INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /v2.1/6f70656e737461636b20342065766572/os-migrations" status: 200 len: 373 microversion: 2.53 time: 0.004618     2017-08-29 19:11:16,280 INFO [nova.api.openstack.requestlog] 127.0.0.1 "PUT /v2.1/6f70656e737461636b20342065766572/os-services/c269bc74-4720-4de4-a6e5-889080b892a0" status: 200 len: 245 microversion: 2.53 time: 0.016442     2017-08-29 19:11:16,281 INFO [nova.service] Starting compute node (version 16.0.0)     2017-08-29 19:11:16,296 INFO 
[nova.compute.manager] Deleting instance as it has been evacuated from this host To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1713783/+subscriptions From 1734320 at bugs.launchpad.net Fri Dec 7 13:33:40 2018 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 07 Dec 2018 13:33:40 -0000 Subject: [Openstack-security] [Bug 1734320] Fix merged to os-vif (master) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <154418962044.16410.4131141337611090065.malone@soybean.canonical.com> Reviewed: https://review.openstack.org/602384 Committed: https://git.openstack.org/cgit/openstack/os-vif/commit/?id=165ed325917e5deadb274ad9c122db157c0b55b2 Submitter: Zuul Branch: master commit 165ed325917e5deadb274ad9c122db157c0b55b2 Author: Sean Mooney Date: Thu Sep 13 16:50:33 2018 +0100 always create ovs port during plug - This change modifies the ovs plugin to always create the ovs interface in the ovs db. - This change enables the neutron l2 agent to configure the ovs interface by assigning a vlan tag and installing openflow rules as appropriate. - This change will reduce the live migration time for kernel ovs ports with hybrid plug false by creating the ovs port as part of plug before the migration starts. - This change adds the privsep decorator to delete_net_dev to account for its new usage via _unplug_vif_generic and address bug #1801072 Change-Id: Iaf15fa7a678ec2624f7c12f634269c465fbad930 Partial-Bug: #1734320 Closes-Bug: #1801072 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: In Progress Status in OpenStack Compute (nova): In Progress Status in os-vif: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports that are administratively down, even between tenants' self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. 
Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. 
Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: 
/usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1734320 at bugs.launchpad.net Fri Dec 7 21:13:46 2018 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 07 Dec 2018 21:13:46 -0000 Subject: [Openstack-security] [Bug 1734320] Fix merged to os-vif (master) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <154421722700.16896.11730446676585815360.malone@soybean.canonical.com> Reviewed: https://review.openstack.org/612534 Committed: https://git.openstack.org/cgit/openstack/os-vif/commit/?id=d291213f1ea62f93008deef5224506fb5ea5ee0d Submitter: Zuul Branch: master commit d291213f1ea62f93008deef5224506fb5ea5ee0d Author: Sean Mooney Date: Tue Oct 23 00:10:38 2018 +0100 add isolate_vif config option - This change add a new isolate_vif config option to the OVS plugin. - The isolate_vif option defaults to False for backwards compatiblity with SDN-based deployments. - This change is a partial mitigation of bug 1734320, when isolate_vif is set to True os-vif will assign VIFs to the neutron l2 agent dead VLAN 4095. This should only be set when using the ml2/ovs neutron backend. 
Change-Id: I87ee9626cc6b4a01465a6b1908bc66bc7be0a4bc Partial-Bug: #1734320 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: In Progress Status in OpenStack Compute (nova): In Progress Status in os-vif: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. 
Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1795800 at bugs.launchpad.net Mon Dec 10 00:11:08 2018 From: 1795800 at bugs.launchpad.net (Andy Ngo) Date: Mon, 10 Dec 2018 00:11:08 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Username enumeration via response timing difference References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154440066884.25286.11847629620821571673.malone@gac.canonical.com> Regardless of the "to fix or not to fix" question, can we please start the process of filing this bug with MITRE and get a CVE assigned for tracking? Perhaps we should consider disclosing this issue to the public via an official channel e.g. OpenStack maintainers? I think we have all agreed that this is indeed an information disclosure issue. The question of how easy it is to fix should not prevent us from carrying out our duty of care i.e. properly disclosing to keystone users. After all, security issues mean different things to different organisations and, along with that, carry difference severity. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Username enumeration via response timing difference Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: This issue is being treated as a potential security risk under embargo. 
Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From fungi at yuggoth.org Mon Dec 10 02:19:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Dec 2018 02:19:21 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Username enumeration via response timing difference References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154440836188.4927.11350757993004724893.malone@wampee.canonical.com> The OpenStack Vulnerability Management Team only requests CVE assignments to track vulnerabilities corresponding to fixes they're announcing via OpenStack Security Advisory publications, but anyone can request a CVE assignment from MITRE for any issue they'd like to track (or assign one themselves directly if they're a CNA). Please don't let the ongoing conversation stop you from obtaining a CVE for this particular bug if you want one, but please note within the bug report if you do so in order that we might avoid future duplication. 
Technically this concern is already disclosed to the public by nature of being a public bug report (I just now noticed I failed to clean up the embargo preamble in the description when making it public in October, but have gone ahead and taken care of it just now). Further, as it's tagged "security" all comments are also being copied to the public openstack-security mailing list (I have no idea what "OpenStack Maintainers" is in your last comment): http://lists.openstack.org/pipermail/openstack-security/2018-December/ ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Username enumeration via response timing difference Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. 
Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From 1734320 at bugs.launchpad.net Fri Dec 14 05:40:17 2018 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 14 Dec 2018 05:40:17 -0000 Subject: [Openstack-security] [Bug 1734320] Fix merged to neutron (master) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <154476601752.8553.303193130982541579.malone@gac.canonical.com> Reviewed: https://review.openstack.org/616609 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=e3dc447b908f57e9acc0378111b8e09cbd88ddc5 Submitter: Zuul Branch: master commit e3dc447b908f57e9acc0378111b8e09cbd88ddc5 Author: Sean Mooney Date: Thu Nov 8 16:07:55 2018 +0000 raise priority of dead vlan drop - This change adds a max priority flow to drop all traffic that is associated with the DEAD VLAN 4095. - This change is part of a partial mitigation of bug 1734320. Without this change vlan 4095 traffic will be dropped via a low priority flow after being processed by part/all of the openflow pipeline. By raising the priority and dropping in table 0 we drop invalid packets as soon as they enter the pipeline. Change-Id: I3482c7c4f00942828cc9396cd2f3d646c9e8c9d1 Partial-Bug: #1734320 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: In Progress Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Committed Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. 
Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. 
Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1795800 at bugs.launchpad.net Mon Dec 17 08:10:27 2018 From: 1795800 at bugs.launchpad.net (Andy Ngo) Date: Mon, 17 Dec 2018 08:10:27 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Username enumeration via response timing difference References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154503422739.1261.15407815018403158951.malone@chaenomeles.canonical.com> CVE-2018-20170 assigned ** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2018-20170 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Username enumeration via response timing difference Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. 
Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From fungi at yuggoth.org Mon Dec 17 14:07:43 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 Dec 2018 14:07:43 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Timing oracle in core auth plugin simplifies brute-forcing usernames References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154505566432.1647.16785499070024516316.launchpad@chaenomeles.canonical.com> ** Summary changed: - Username enumeration via response timing difference + Timing oracle in core auth plugin simplifies brute-forcing usernames -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. 
There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From fungi at yuggoth.org Mon Dec 17 14:13:54 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 Dec 2018 14:13:54 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Timing oracle in core auth plugin simplifies brute-forcing usernames References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154505603407.1647.2061755847148638850.malone@chaenomeles.canonical.com> I've updated the bug title to more accurately indicate this is a timing oracle in Keystone's core auth plugin, and so is still mitigated by the usual account brute-forcing defenses (e.g., enforcing strong authentication secrets, temporarily rejecting failing authentication attempts per source IP address, throttling calls to relevant API methods, et cetera). -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. 
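To make one of the defenses listed above concrete, here is a minimal sketch of temporarily rejecting failing authentication attempts per source IP address; the window and threshold values are arbitrary placeholders, and a real deployment would more typically enforce this in a load balancer or dedicated middleware rather than inside Keystone itself.

    import time
    from collections import defaultdict

    WINDOW = 60        # seconds; arbitrary illustration value
    MAX_FAILURES = 10  # failed attempts tolerated per window; arbitrary

    _failures = defaultdict(list)

    def allow_attempt(source_ip):
        """Return False once an IP has accumulated too many recent failures."""
        now = time.monotonic()
        _failures[source_ip] = [t for t in _failures[source_ip]
                                if now - t < WINDOW]
        return len(_failures[source_ip]) < MAX_FAILURES

    def record_failure(source_ip):
        """Call this after each failed authentication attempt from source_ip."""
        _failures[source_ip].append(time.monotonic())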
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From fungi at yuggoth.org Mon Dec 17 14:22:33 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 Dec 2018 14:22:33 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Timing oracle in core auth plugin simplifies brute-forcing usernames References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154505655346.28741.10900540827627148738.malone@wampee.canonical.com> Note that, unlike what the CVE dispute info suggests, the Keystone team has not concluded performance degradation would result from possible methods of reducing the efficacy of this particular oracle. They in fact indicated that fixes for this hardening opportunity would be welcome ("wishlist" importance, please see Colleen's comment #9). -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name": "admin","domain":{"name": "Default"},"password": "devstacker"}}}}} + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size. 
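One commonly discussed way to act on that hardening opportunity is to spend comparable password-hashing work whether or not the username resolves, so both outcomes take similar time. The sketch below is a generic illustration of that idea under assumed names (user_lookup, the salt/hash fields, the iteration count); it is not the content of any proposed Keystone change.

    import hashlib
    import hmac
    import os

    _DUMMY_SALT = os.urandom(16)  # fixed per-process salt for the dummy hash
    _ITERATIONS = 100000          # illustrative work factor

    def _check_password(password, stored_salt, stored_hash):
        candidate = hashlib.pbkdf2_hmac('sha512', password.encode(),
                                        stored_salt, _ITERATIONS)
        return hmac.compare_digest(candidate, stored_hash)

    def authenticate(user_lookup, username, password):
        """user_lookup is a hypothetical callable returning a user dict or None."""
        user = user_lookup(username)
        if user is None:
            # Burn roughly the same CPU time as a real verification, then
            # fail, so unknown usernames are not measurably faster to reject.
            hashlib.pbkdf2_hmac('sha512', password.encode(),
                                _DUMMY_SALT, _ITERATIONS)
            return False
        return _check_password(password, user['salt'], user['hash'])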
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From 1795800 at bugs.launchpad.net Mon Dec 17 19:53:41 2018 From: 1795800 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 17 Dec 2018 19:53:41 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Timing oracle in core auth plugin simplifies brute-forcing usernames References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154507642162.28701.13639047832042098300.malone@wampee.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/625699 ** Changed in: keystone Status: Triaged => In Progress ** Changed in: keystone Assignee: (unassigned) => Gage Hugo (gagehugo) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"nonexisting", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"admin", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognizable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From 1795800 at bugs.launchpad.net Mon Dec 17 19:54:12 2018 From: 1795800 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 17 Dec 2018 19:54:12 -0000 Subject: [Openstack-security] [Bug 1795800] Fix proposed to keystone (master) References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154507645211.9018.10583128377377137501.malone@gac.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/625700 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"nonexisting", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"admin", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognizable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From lbragstad at gmail.com Mon Dec 17 19:58:56 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 17 Dec 2018 19:58:56 -0000 Subject: [Openstack-security] [Bug 1795800] Re: Timing oracle in core auth plugin simplifies brute-forcing usernames References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154507673659.1298.5218730892561944671.launchpad@chaenomeles.canonical.com> ** Description changed: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. 
  Examples:

  # For invalid username
  + Request
  POST /v3/auth/tokens HTTP/1.1
  Host: hostname:5000
  Connection: close
  Content-Length: 141
  Content-Type: application/json

- {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name":
- "nonexisting","domain":{"name": "Default"},"password": "devstacker"}}}}}
+ {
+ "auth":{
+ "identity":{
+ "methods":[
+ "password"
+ ],
+ "password":{
+ "user":{
+ "name":"nonexisting",
+ "domain":{
+ "name":"Default"
+ },
+ "password":"devstacker"
+ }
+ }
+ }
+ }
+ }

  + Response Time: <150ms

  # For valid username ('admin' in this case)
  + Request
  POST /v3/auth/tokens HTTP/1.1
  Host: hostname:5000
  Connection: close
  Content-Length: 139
  Content-Type: application/json

- {"auth":{"identity":{"methods": ["password"],"password":{"user":{"name":
- "admin","domain":{"name": "Default"},"password": "devstacker"}}}}}
+ {
+ "auth":{
+ "identity":{
+ "methods":[
+ "password"
+ ],
+ "password":{
+ "user":{
+ "name":"admin",
+ "domain":{
+ "name":"Default"
+ },
+ "password":"devstacker"
+ }
+ }
+ }
+ }
+ }

  + Response time: >600ms

  # Tested version v3.8

  [UPDATE 3 Oct 2018 5:01 AEST]
  Looks like it's also possible to enumerate for valid "domain" too.
  There're 2 ways that I can see:

  * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains.

- * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognisable with a good sample size.
+ * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognizable with a good sample size.

-- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system.
Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"nonexisting", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"admin", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognizable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions From 1795800 at bugs.launchpad.net Mon Dec 17 22:57:36 2018 From: 1795800 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 17 Dec 2018 22:57:36 -0000 Subject: [Openstack-security] [Bug 1795800] Change abandoned on keystone (master) References: <153854884921.22301.15072922945681177517.malonedeb@gac.canonical.com> Message-ID: <154508745683.913.5069163086783468451.malone@chaenomeles.canonical.com> Change abandoned by Gage Hugo (gagehugo at gmail.com) on branch: master Review: https://review.openstack.org/625700 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1795800 Title: Timing oracle in core auth plugin simplifies brute-forcing usernames Status in OpenStack Identity (keystone): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: The response times for POST /v3/auth/tokens are significantly higher for valid usernames compared to those of invalid ones, making it possible to enumerate users on the system. Examples: # For invalid username + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 141 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"nonexisting", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response Time: <150ms # For valid username ('admin' in this case) + Request POST /v3/auth/tokens HTTP/1.1 Host: hostname:5000 Connection: close Content-Length: 139 Content-Type: application/json { "auth":{ "identity":{ "methods":[ "password" ], "password":{ "user":{ "name":"admin", "domain":{ "name":"Default" }, "password":"devstacker" } } } } } + Response time: >600ms # Tested version v3.8 [UPDATE 3 Oct 2018 5:01 AEST] Looks like it's also possible to enumerate for valid "domain" too. 
There're 2 ways that I can see: * With valid username: use the above user enum bug to guess the valid username, then brute the "domain" parameter. Response times are significantly higher for valid compared to invalid domains. * Without valid username: get a baseline response time using invalid username AND invalid domain name. Bruteforce the "domain" param until the response time hits an average high. For me invalid domain falls in the 90-100ms range whereas valid ones show 100+ms. This one looks a bit more obscure i.e. timing difference is not as distinguishable, but should still be recognizable with a good sample size. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1795800/+subscriptions
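Finally, to illustrate the "good sample size" remark in the description above, here is a hedged sketch that averages many requests so the smaller domain-related timing difference becomes visible. It reuses the same placeholder endpoint and credentials as the earlier probe, and the sample count is arbitrary; nothing here is a confirmed reproduction script.

    import statistics
    import time
    import requests

    # Placeholder endpoint and password from the bug description examples.
    KEYSTONE = 'http://hostname:5000/v3/auth/tokens'

    def time_auth_request(username, domain):
        body = {'auth': {'identity': {'methods': ['password'],
                                      'password': {'user': {
                                          'name': username,
                                          'domain': {'name': domain},
                                          'password': 'devstacker'}}}}}
        start = time.monotonic()
        requests.post(KEYSTONE, json=body)
        return time.monotonic() - start

    def mean_response_time(username, domain, samples=50):
        """Average many samples; single requests are too noisy for a ~10ms gap."""
        return statistics.mean(time_auth_request(username, domain)
                               for _ in range(samples))

    baseline = mean_response_time('nonexisting', 'no-such-domain')
    candidate = mean_response_time('nonexisting', 'Default')
    print('baseline %.1f ms vs candidate %.1f ms'
          % (baseline * 1000, candidate * 1000))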