From 1824248 at bugs.launchpad.net Wed Dec 4 08:36:39 2019 From: 1824248 at bugs.launchpad.net (Slawek Kaplonski) Date: Wed, 04 Dec 2019 08:36:39 -0000 Subject: [Openstack-security] [Bug 1824248] Re: Security Group filtering hides rules from user References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157544859967.22292.3218211834355404320.malone@chaenomeles.canonical.com> This bug has had a related patch abandoned and has been automatically un-assigned due to inactivity. Please re-assign yourself if you are continuing work or adjust the state as appropriate if it is no longer valid. ** Changed in: neutron Assignee: Slawek Kaplonski (slaweq) => (unassigned) ** Tags added: timeout-abandon -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Wed Dec 4 08:36:44 2019 From: 1824248 at bugs.launchpad.net (Slawek Kaplonski) Date: Wed, 04 Dec 2019 08:36:44 -0000 Subject: [Openstack-security] [Bug 1824248] auto-abandon-script References: 
<155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157544860410.22731.809213513469165734.malone@chaenomeles.canonical.com> This bug has had a related patch abandoned and has been automatically un-assigned due to inactivity. Please re-assign yourself if you are continuing work or adjust the state as appropriate if it is no longer valid. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Wed Dec 4 08:36:51 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 04 Dec 2019 08:36:51 -0000 Subject: [Openstack-security] [Bug 1824248] Change abandoned on neutron (stable/rocky) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157544861195.22400.16956681263341084593.malone@chaenomeles.canonical.com> Change abandoned by Slawek Kaplonski (skaplons at redhat.com) on branch: stable/rocky Review: https://review.opendev.org/688717 Reason: This review is > 4 weeks without comment, and failed 
Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Wed Dec 4 08:37:20 2019 From: 1824248 at bugs.launchpad.net (Slawek Kaplonski) Date: Wed, 04 Dec 2019 08:37:20 -0000 Subject: [Openstack-security] [Bug 1824248] auto-abandon-script References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157544864069.31218.12342084291903109444.malone@wampee.canonical.com> This bug has had a related patch abandoned and has been automatically un-assigned due to inactivity. Please re-assign yourself if you are continuing work or adjust the state as appropriate if it is no longer valid. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Wed Dec 4 08:37:29 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 04 Dec 2019 08:37:29 -0000 Subject: [Openstack-security] [Bug 1824248] Change abandoned on neutron (stable/queens) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157544864957.22292.14750059287624447282.malone@chaenomeles.canonical.com> Change abandoned by Slawek Kaplonski (skaplons at redhat.com) on branch: stable/queens Review: https://review.opendev.org/688719 Reason: This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1824248 at bugs.launchpad.net Wed Dec 4 08:37:35 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Wed, 04 Dec 2019 08:37:35 -0000 Subject: [Openstack-security] [Bug 1824248] Change abandoned on neutron-tempest-plugin (master) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157544865509.22230.17024574485511388199.malone@soybean.canonical.com> Change abandoned by Slawek Kaplonski (skaplons at redhat.com) on branch: master Review: https://review.opendev.org/681912 Reason: This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1851587 at bugs.launchpad.net Wed Dec 4 18:29:59 2019 From: 1851587 at bugs.launchpad.net (Nick Tait) Date: Wed, 04 Dec 2019 18:29:59 -0000 Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com> Message-ID: <157548419993.7085.16990411817466611135.malone@gac.canonical.com> So far I agree with hardening classification. But what I don't yet understand is why revealing FQDN is a security problem. Pretend I am a non-admin, and I learn that there is a compute host available at www.example.com. Wouldn't I need valid access credentials to that host before viewing/tampering with it? In the physical world, leaking your home address to an adversary would be a security problem. How does this transfer into the digital example? In the case of OpenStack being run as a public cloud, the non-admin user has to be treated as a potentially untrustworthy person. 
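A mitigation suggested later in this thread is to drop the compute host FQDN from the exception message before it reaches non-admin users. A minimal sketch of that idea, assuming a simple post-processing helper (the function name and redaction token are illustrative, not nova's actual fault-handling code):

```python
# Illustrative only: mask the compute host FQDN in a fault message before it
# is shown to a non-admin API user. Nova's real fault handling is more involved.
def scrub_fault_message(message, hostname, placeholder="<redacted-host>"):
    return message.replace(hostname, placeholder)

fault = "Connection to the hypervisor is broken on host: compute-0.redhat.local"
print(scrub_fault_message(fault, "compute-0.redhat.local"))
# Connection to the hypervisor is broken on host: <redacted-host>
```

Anything beyond plain substitution (for example mapping faults to generic messages by error code) would be a design choice for the actual fix.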
-- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1851587 Title: HypervisorUnavailable error leaks compute host fqdn to non-admin users Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== When an instance encounters a HypervisorUnavailable error, the non-admin user gets the info of the compute host fqdn in the error message. Steps to reproduce ================== 1. Spin up an instance with non-admin user credentials 2. To reproduce the error, stop the libvirtd service on the compute host containing instance 3. Delete the instance 4. Deletion fails providing HypervisorUnavailable error Expected result =============== Error does not show compute host fqdn to a non-admin user Actual result ============= #spin up an instance +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ACTIVE | None | Running | private=192.168.100.158, 10.0.0.243 | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ #instance is running on compute-0 node (only admin knows this) [heat-admin at compute-0 ~]$ sudo virsh list --all Id Name State ---------------------------------------------------- 108 instance-00000092 running #stop libvirtd service [root at compute-0 heat-admin]# systemctl stop tripleo_nova_libvirt.service [root at compute-0 heat-admin]# systemctl status tripleo_nova_libvirt.service ● tripleo_nova_libvirt.service - nova_libvirt container Loaded: loaded (/etc/systemd/system/tripleo_nova_libvirt.service; enabled; vendor preset: disabled) Active: inactive (dead) since Wed 2019-11-06 22:48:25 UTC; 5s ago Process: 8514 ExecStop=/usr/bin/podman stop -t 10 nova_libvirt (code=exited, status=0/SUCCESS) Main PID: 3783 Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.443603571 +0000 UTC m=+1.325620613 container init a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.475946808 +0000 UTC m=+1.357963869 container start a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpl> Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: nova_libvirt Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: Creating additional drop-in dependency for "nova_libvirt" (a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb) Nov 06 22:29:49 compute-0 systemd[1]: Started nova_libvirt container. 
Nov 06 22:48:24 compute-0 systemd[1]: Stopping nova_libvirt container... Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.595405651 +0000 UTC m=+1.063832024 container died a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.597210594 +0000 UTC m=+1.065636903 container stop a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb Nov 06 22:48:25 compute-0 systemd[1]: Stopped nova_libvirt container. #delete the instance, it leaks compute host fqdn to the non-admin user (overcloud) [stack at undercloud-0 ~]$ nova delete test-11869 Request to delete server test-11869 has been accepted. (overcloud) [stack at undercloud-0 ~]$ openstack server list --long +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ERROR | None | Running | | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ (overcloud) [stack at undercloud-0 ~]$ openstack server show test-11869 <---debug output attached in logs +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | error | | OS-SRV-USG:launched_at | 2019-11-06T22:13:08.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | config_drive | | | created | 2019-11-06T22:12:57Z | | description | None | | fault | {'code': 500, 'created': '2019-11-06T23:01:45Z', 'message': 'Connection to the hypervisor is broken on host: compute-0.redhat.local'} | | flavor | disk='1', ephemeral='0', , original_name='m1.tiny', ram='512', swap='0', vcpus='1' | | hostId | c7e6bf58b57f435659bb0aa9637c7f830f776ec202a0d6e430ee3168 | | id | 4f42886d-e1f8-4607-a09d-0dc12a681880 | | image | cirros-0.4.0-x86_64-disk.img (5d0bd6a5-7331-4ebe-9328-d126189897e2) | | key_name | None | | locked | False | | locked_reason | None | | name | test-11869 | | project_id | 6e39619e17a9478580c93120e1cb16bc | | properties | | | server_groups | [] | | status | ERROR | | tags | [] | | trusted_image_certificates | None | | updated | 2019-11-06T23:01:45Z | | user_id | 3cd6a8cb88eb49d3a84f9e67d89df598 
| | volumes_attached | | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions From 1851587 at bugs.launchpad.net Wed Dec 4 18:34:11 2019 From: 1851587 at bugs.launchpad.net (Nick Tait) Date: Wed, 04 Dec 2019 18:34:11 -0000 Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com> Message-ID: <157548445173.6824.17405498035033717274.malone@gac.canonical.com> Are there cases where a non-admin could cause a HypervisorUnavailable situation? -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1851587 Title: HypervisorUnavailable error leaks compute host fqdn to non-admin users Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== When an instance encounters a HypervisorUnavailable error, the non-admin user gets the info of the compute host fqdn in the error message. Steps to reproduce ================== 1. Spin up an instance with non-admin user credentials 2. To reproduce the error, stop the libvirtd service on the compute host containing instance 3. Delete the instance 4. Deletion fails providing HypervisorUnavailable error Expected result =============== Error does not show compute host fqdn to a non-admin user Actual result ============= #spin up an instance +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ACTIVE | None | Running | private=192.168.100.158, 10.0.0.243 | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ #instance is running on compute-0 node (only admin knows this) [heat-admin at compute-0 ~]$ sudo virsh list --all Id Name State ---------------------------------------------------- 108 instance-00000092 running #stop libvirtd service [root at compute-0 heat-admin]# systemctl stop tripleo_nova_libvirt.service [root at compute-0 heat-admin]# systemctl status tripleo_nova_libvirt.service ● tripleo_nova_libvirt.service - nova_libvirt container Loaded: loaded (/etc/systemd/system/tripleo_nova_libvirt.service; enabled; vendor preset: disabled) Active: inactive (dead) since Wed 2019-11-06 22:48:25 UTC; 5s ago Process: 8514 ExecStop=/usr/bin/podman stop -t 10 nova_libvirt (code=exited, 
status=0/SUCCESS) Main PID: 3783 Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.443603571 +0000 UTC m=+1.325620613 container init a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.475946808 +0000 UTC m=+1.357963869 container start a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpl> Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: nova_libvirt Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: Creating additional drop-in dependency for "nova_libvirt" (a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb) Nov 06 22:29:49 compute-0 systemd[1]: Started nova_libvirt container. Nov 06 22:48:24 compute-0 systemd[1]: Stopping nova_libvirt container... Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.595405651 +0000 UTC m=+1.063832024 container died a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.597210594 +0000 UTC m=+1.065636903 container stop a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb Nov 06 22:48:25 compute-0 systemd[1]: Stopped nova_libvirt container. #delete the instance, it leaks compute host fqdn to the non-admin user (overcloud) [stack at undercloud-0 ~]$ nova delete test-11869 Request to delete server test-11869 has been accepted. (overcloud) [stack at undercloud-0 ~]$ openstack server list --long +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ERROR | None | Running | | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ (overcloud) [stack at undercloud-0 ~]$ openstack server show test-11869 <---debug output attached in logs +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | error | | OS-SRV-USG:launched_at | 2019-11-06T22:13:08.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | config_drive | | | created | 2019-11-06T22:12:57Z | | description | 
None | | fault | {'code': 500, 'created': '2019-11-06T23:01:45Z', 'message': 'Connection to the hypervisor is broken on host: compute-0.redhat.local'} | | flavor | disk='1', ephemeral='0', , original_name='m1.tiny', ram='512', swap='0', vcpus='1' | | hostId | c7e6bf58b57f435659bb0aa9637c7f830f776ec202a0d6e430ee3168 | | id | 4f42886d-e1f8-4607-a09d-0dc12a681880 | | image | cirros-0.4.0-x86_64-disk.img (5d0bd6a5-7331-4ebe-9328-d126189897e2) | | key_name | None | | locked | False | | locked_reason | None | | name | test-11869 | | project_id | 6e39619e17a9478580c93120e1cb16bc | | properties | | | server_groups | [] | | status | ERROR | | tags | [] | | trusted_image_certificates | None | | updated | 2019-11-06T23:01:45Z | | user_id | 3cd6a8cb88eb49d3a84f9e67d89df598 | | volumes_attached | | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions From 1851587 at bugs.launchpad.net Thu Dec 5 01:09:45 2019 From: 1851587 at bugs.launchpad.net (melanie witt) Date: Thu, 05 Dec 2019 01:09:45 -0000 Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com> Message-ID: <157550818585.22184.17497751016561979125.malone@chaenomeles.canonical.com> Hi Nick, I hear you and IMHO revealing the FQDN is kind of a "soft" problem, as it could only hurt you (the deployer) if you've got your hypervisor exposed to the public internet and revealing its address is going to give someone the opportunity to launch a targeted attack on it to brute force the credentials (or whatever else). Having a hypervisor exposed to the internet isn't typical or recommended and probably (hopefully) nobody does that, but if they do, it could be a problem. Hence, this is a "hardening opportunity" and we've not proposed a patch to deal with it yet because (1) it's a "soft" problem and (2) it's not trivial to fix unless we just remove the FQDN from the exception message altogether (which I am personally fine with). To answer your last question, yes a non-admin user can see HypervisorUnavailable if, for example, the libvirt process is stopped or nova otherwise can't reach the libvirt monitor when they attempt to delete their server. This is rare I expect, but could happen. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1851587 Title: HypervisorUnavailable error leaks compute host fqdn to non-admin users Status in OpenStack Compute (nova): New Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== When an instance encounters a HypervisorUnavailable error, the non-admin user gets the info of the compute host fqdn in the error message. Steps to reproduce ================== 1. Spin up an instance with non-admin user credentials 2. To reproduce the error, stop the libvirtd service on the compute host containing instance 3. Delete the instance 4. 
Deletion fails providing HypervisorUnavailable error Expected result =============== Error does not show compute host fqdn to a non-admin user Actual result ============= #spin up an instance +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ACTIVE | None | Running | private=192.168.100.158, 10.0.0.243 | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ #instance is running on compute-0 node (only admin knows this) [heat-admin at compute-0 ~]$ sudo virsh list --all Id Name State ---------------------------------------------------- 108 instance-00000092 running #stop libvirtd service [root at compute-0 heat-admin]# systemctl stop tripleo_nova_libvirt.service [root at compute-0 heat-admin]# systemctl status tripleo_nova_libvirt.service ● tripleo_nova_libvirt.service - nova_libvirt container Loaded: loaded (/etc/systemd/system/tripleo_nova_libvirt.service; enabled; vendor preset: disabled) Active: inactive (dead) since Wed 2019-11-06 22:48:25 UTC; 5s ago Process: 8514 ExecStop=/usr/bin/podman stop -t 10 nova_libvirt (code=exited, status=0/SUCCESS) Main PID: 3783 Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.443603571 +0000 UTC m=+1.325620613 container init a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.475946808 +0000 UTC m=+1.357963869 container start a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpl> Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: nova_libvirt Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: Creating additional drop-in dependency for "nova_libvirt" (a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb) Nov 06 22:29:49 compute-0 systemd[1]: Started nova_libvirt container. Nov 06 22:48:24 compute-0 systemd[1]: Stopping nova_libvirt container... Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.595405651 +0000 UTC m=+1.063832024 container died a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.597210594 +0000 UTC m=+1.065636903 container stop a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb Nov 06 22:48:25 compute-0 systemd[1]: Stopped nova_libvirt container. 
#delete the instance, it leaks compute host fqdn to the non-admin user (overcloud) [stack at undercloud-0 ~]$ nova delete test-11869 Request to delete server test-11869 has been accepted. (overcloud) [stack at undercloud-0 ~]$ openstack server list --long +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ERROR | None | Running | | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ (overcloud) [stack at undercloud-0 ~]$ openstack server show test-11869 <---debug output attached in logs +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | error | | OS-SRV-USG:launched_at | 2019-11-06T22:13:08.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | config_drive | | | created | 2019-11-06T22:12:57Z | | description | None | | fault | {'code': 500, 'created': '2019-11-06T23:01:45Z', 'message': 'Connection to the hypervisor is broken on host: compute-0.redhat.local'} | | flavor | disk='1', ephemeral='0', , original_name='m1.tiny', ram='512', swap='0', vcpus='1' | | hostId | c7e6bf58b57f435659bb0aa9637c7f830f776ec202a0d6e430ee3168 | | id | 4f42886d-e1f8-4607-a09d-0dc12a681880 | | image | cirros-0.4.0-x86_64-disk.img (5d0bd6a5-7331-4ebe-9328-d126189897e2) | | key_name | None | | locked | False | | locked_reason | None | | name | test-11869 | | project_id | 6e39619e17a9478580c93120e1cb16bc | | properties | | | server_groups | [] | | status | ERROR | | tags | [] | | trusted_image_certificates | None | | updated | 2019-11-06T23:01:45Z | | user_id | 3cd6a8cb88eb49d3a84f9e67d89df598 | | volumes_attached | | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions From 1563954 at bugs.launchpad.net Fri Dec 6 10:41:34 2019 From: 1563954 at bugs.launchpad.net (XiaojueGuan) Date: Fri, 06 Dec 2019 10:41:34 -0000 Subject: [Openstack-security] [Bug 1563954] Re: use_forwarded_for exposes metadata References: <20160330160317.3139.18707.malonedeb@chaenomeles.canonical.com> Message-ID: 
<157562889448.31832.9244475103590033227.malone@wampee.canonical.com> this bug might already fixed, i tried to reproduce but failed with devstack. [stack at devstack devstack]$ openstack server create --image centos7 --flavor ds1G --network f2a39df5-0938-4973-9810-a80d341229bf --user-data /tmp/data test1 +-------------------------------------+------------------------------------------------+ | Field | Value | +-------------------------------------+------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | TuHp5eHEMMfW | | config_drive | | | created | 2019-12-06T10:32:10Z | | flavor | ds1G (d2) | | hostId | | | id | afdecb55-f3e1-4099-b947-1704112cb9ae | | image | centos7 (2f2396d2-b32e-4275-93b3-df9fb376dc36) | | key_name | None | | name | test1 | | progress | 0 | | project_id | 9aafb875525b45b79be2d1ca5d27ffb0 | | properties | | | security_groups | name='default' | | status | BUILD | | updated | 2019-12-06T10:32:10Z | | user_id | d994888b0c764c8288cc1162f69b8d8b | | volumes_attached | | +-------------------------------------+------------------------------------------------+ [stack at devstack devstack]$ curl -H 'X-Forwarded-For: 192.168.199.151' http://localhost:8775/latest/user-data/ 400 Bad Request

X-Instance-ID header is missing from request.
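For context on the 400 above: when Nova trusts the Neutron metadata agent (service_metadata_proxy enabled, which is how a DevStack with Neutron is typically configured), instance identity comes from signed X-Instance-ID headers rather than from the peer address, so a bare spoofed X-Forwarded-For is rejected. A rough sketch of the request such a proxy sends, assuming requests is available; the UUID and project ID are taken from the reproduction above and the shared secret is a placeholder that would have to match metadata_proxy_shared_secret:

```python
import hashlib
import hmac

import requests  # assumed available

# Placeholders: instance UUID and project ID from the server create output above;
# the secret must match metadata_proxy_shared_secret in nova.conf and the
# Neutron metadata agent configuration.
instance_id = "afdecb55-f3e1-4099-b947-1704112cb9ae"
project_id = "9aafb875525b45b79be2d1ca5d27ffb0"
shared_secret = b"change-me"

# The metadata proxy signs the instance ID with HMAC-SHA256; Nova recomputes
# and compares the signature before serving metadata for that instance.
signature = hmac.new(shared_secret, instance_id.encode(), hashlib.sha256).hexdigest()

resp = requests.get(
    "http://localhost:8775/latest/user-data",
    headers={
        "X-Instance-ID": instance_id,
        "X-Tenant-ID": project_id,
        "X-Instance-ID-Signature": signature,
        "X-Forwarded-For": "192.168.199.151",
    },
)
print(resp.status_code)
print(resp.text)
```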

-- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1563954 Title: use_forwarded_for exposes metadata Status in OpenStack Compute (nova): Confirmed Status in OpenStack Security Advisory: Opinion Status in OpenStack Security Notes: Fix Released Bug description: The nova metadata service uses the remote address to determine which metadata to retrieve. In order to work behind a proxy there is an option use_forwarded_for which will use the X-Forwarded-For header to determine the remote IP. If this option is set then anyone who can access the metadata port can request metadata for any instance if they know the IP. The user data is also exposed. $ echo 123456 > /tmp/data $ openstack server create --image CentOS7 --flavor fedora --user-data /tmp/data test $ curl -H 'X-Forwarded-For: 10.0.0.7' http://localhost:8775/latest/user-data/ 123456 At a minimum this side-effect isn't documented anywhere I could find. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1563954/+subscriptions From 1563954 at bugs.launchpad.net Fri Dec 6 10:44:59 2019 From: 1563954 at bugs.launchpad.net (XiaojueGuan) Date: Fri, 06 Dec 2019 10:44:59 -0000 Subject: [Openstack-security] [Bug 1563954] Re: use_forwarded_for exposes metadata References: <20160330160317.3139.18707.malonedeb@chaenomeles.canonical.com> Message-ID: <157562909961.21973.11935386336403586456.malone@soybean.canonical.com> [stack at devstack devstack]$ ip addr | grep ens33 2: ens33: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 inet 192.168.199.151/24 brd 192.168.199.255 scope global noprefixroute ens33 [root at devstack nova]# git log commit d2bf17eaf4c66fc7ffec671e0d2d7ed2b4dde87c Merge: dd12b3b 0461921 Author: Zuul Date: Thu Dec 5 01:24:30 2019 +0000 Merge "Cache security group driver" commit dd12b3b407ea3d5b8cdaf404c43b7d25b2e3927a Merge: 0a1b604 846fc0a Author: Zuul Date: Thu Dec 5 00:30:00 2019 +0000 add my ip of my environment is above. and my current branch is at master shall we close this bug -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1563954 Title: use_forwarded_for exposes metadata Status in OpenStack Compute (nova): Confirmed Status in OpenStack Security Advisory: Opinion Status in OpenStack Security Notes: Fix Released Bug description: The nova metadata service uses the remote address to determine which metadata to retrieve. In order to work behind a proxy there is an option use_forwarded_for which will use the X-Forwarded-For header to determine the remote IP. If this option is set then anyone who can access the metadata port can request metadata for any instance if they know the IP. The user data is also exposed. $ echo 123456 > /tmp/data $ openstack server create --image CentOS7 --flavor fedora --user-data /tmp/data test $ curl -H 'X-Forwarded-For: 10.0.0.7' http://localhost:8775/latest/user-data/ 123456 At a minimum this side-effect isn't documented anywhere I could find. 
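  The core of the report is the address-selection step: with use_forwarded_for enabled, a client-supplied header replaces the peer address that the metadata service uses to identify the instance. A minimal sketch of the behaviour described above (not nova's actual code):

  ```python
  # Sketch of the lookup the report describes: when use_forwarded_for is on,
  # the client-controlled header wins, so any caller who can reach the
  # metadata port can impersonate another instance's fixed IP.
  def remote_address(environ, use_forwarded_for):
      address = environ.get("REMOTE_ADDR")
      if use_forwarded_for:
          address = environ.get("HTTP_X_FORWARDED_FOR", address)
      return address

  env = {"REMOTE_ADDR": "10.0.0.5", "HTTP_X_FORWARDED_FOR": "10.0.0.7"}
  print(remote_address(env, True))   # 10.0.0.7 (spoofable)
  print(remote_address(env, False))  # 10.0.0.5
  ```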
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1563954/+subscriptions From fungi at yuggoth.org Fri Dec 6 16:48:31 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 06 Dec 2019 16:48:31 -0000 Subject: [Openstack-security] [Bug 1563954] Re: use_forwarded_for exposes metadata References: <20160330160317.3139.18707.malonedeb@chaenomeles.canonical.com> Message-ID: <157565091110.22072.9216285755875837564.malone@soybean.canonical.com> Any idea what would have fixed it? My understanding of this report was that it's an architectural issue in deployments where if you set up a proxy (for example a load balancer) between the guest instances and the metadata service then you need to be able to tell the metadata service to look at the X-Forwarded-For header added by the load balancer to know which instance is making the request. If you set that and your network is mis-designed to allow instances to also contact the metadata service directly without passing through the load balancer, then the request from the instance can be specially constructed so as to include a spoofed X-Forwarded-For header which allows it to obtain metadata for a different instance associated with the address included in that injected header. -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1563954 Title: use_forwarded_for exposes metadata Status in OpenStack Compute (nova): Confirmed Status in OpenStack Security Advisory: Opinion Status in OpenStack Security Notes: Fix Released Bug description: The nova metadata service uses the remote address to determine which metadata to retrieve. In order to work behind a proxy there is an option use_forwarded_for which will use the X-Forwarded-For header to determine the remote IP. If this option is set then anyone who can access the metadata port can request metadata for any instance if they know the IP. The user data is also exposed. $ echo 123456 > /tmp/data $ openstack server create --image CentOS7 --flavor fedora --user-data /tmp/data test $ curl -H 'X-Forwarded-For: 10.0.0.7' http://localhost:8775/latest/user-data/ 123456 At a minimum this side-effect isn't documented anywhere I could find. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1563954/+subscriptions From 1855902 at bugs.launchpad.net Tue Dec 10 16:31:32 2019 From: 1855902 at bugs.launchpad.net (Joris Hartog) Date: Tue, 10 Dec 2019 16:31:32 -0000 Subject: [Openstack-security] [Bug 1855902] [NEW] Inefficient Security Group listing Message-ID: <157599549287.6430.1011745077318200014.malonedeb@soybean.canonical.com> Public bug reported: Issue: Fetching a large Security Group list takes relatively long as several database queries are made for each Security Group. Context: Listing SG's takes around 9 seconds with ~500 existing SG's, 16 seconds with ~1000 SG's and around 30 seconds with ~1500 existing SG's, so this time seems to grow at least linearly with number of SG's. We've looked at flamegraphs of the neutron controller which show that the stack frame `/usr/lib/python2.7/site- packages/neutron/db/securitygroups_db.py:get_security_groups:166` splits into two long running functions, each taking about half of the time (one at line 112 and the other at 115). ```python 103 @classmethod 104 def get_objects(cls, context, _pager=None, validate_filters=True, 105 **kwargs): 106 # We want to get the policy regardless of its tenant id. 
We'll make 107 # sure the tenant has permission to access the policy later on. 108 admin_context = context.elevated() 109 with cls.db_context_reader(admin_context): 110 objs = super(RbacNeutronDbObjectMixin, 111 cls).get_objects(admin_context, _pager, 112 validate_filters, **kwargs) 113 result = [] 114 for obj in objs: 115 if not cls.is_accessible(context, obj): 116 continue 117 result.append(obj) 118 return result ``` We've also seen that the number of database queries also seems to grow linearly: * Listing ~500 SG's performs ~2100 queries * Listing ~1000 SG's performs ~3500 queries * Listing ~1500 SG's performs ~5200 queries This does not scale well, we're expecting a neglectable increase in listing time. Reproduction: * Create 1000 SG's * Execute `time openstack security group list` * Create 500 more SG's * Execute `time openstack security group list` Version: We're using neutron 14.0.2-1 on CentOS 7.7.1908. Perceived Severity: MEDIUM ** Affects: neutron Importance: Undecided Status: New ** Tags: group list security time -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1855902 Title: Inefficient Security Group listing Status in neutron: New Bug description: Issue: Fetching a large Security Group list takes relatively long as several database queries are made for each Security Group. Context: Listing SG's takes around 9 seconds with ~500 existing SG's, 16 seconds with ~1000 SG's and around 30 seconds with ~1500 existing SG's, so this time seems to grow at least linearly with number of SG's. We've looked at flamegraphs of the neutron controller which show that the stack frame `/usr/lib/python2.7/site- packages/neutron/db/securitygroups_db.py:get_security_groups:166` splits into two long running functions, each taking about half of the time (one at line 112 and the other at 115). ```python 103 @classmethod 104 def get_objects(cls, context, _pager=None, validate_filters=True, 105 **kwargs): 106 # We want to get the policy regardless of its tenant id. We'll make 107 # sure the tenant has permission to access the policy later on. 108 admin_context = context.elevated() 109 with cls.db_context_reader(admin_context): 110 objs = super(RbacNeutronDbObjectMixin, 111 cls).get_objects(admin_context, _pager, 112 validate_filters, **kwargs) 113 result = [] 114 for obj in objs: 115 if not cls.is_accessible(context, obj): 116 continue 117 result.append(obj) 118 return result ``` We've also seen that the number of database queries also seems to grow linearly: * Listing ~500 SG's performs ~2100 queries * Listing ~1000 SG's performs ~3500 queries * Listing ~1500 SG's performs ~5200 queries This does not scale well, we're expecting a neglectable increase in listing time. Reproduction: * Create 1000 SG's * Execute `time openstack security group list` * Create 500 more SG's * Execute `time openstack security group list` Version: We're using neutron 14.0.2-1 on CentOS 7.7.1908. 
Perceived Severity: MEDIUM To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1855902/+subscriptions From 1855902 at bugs.launchpad.net Wed Dec 11 04:05:43 2019 From: 1855902 at bugs.launchpad.net (Brian Haley) Date: Wed, 11 Dec 2019 04:05:43 -0000 Subject: [Openstack-security] [Bug 1855902] Re: Inefficient Security Group listing References: <157599549287.6430.1011745077318200014.malonedeb@soybean.canonical.com> Message-ID: <157603714400.4729.639297938665006508.malone@chaenomeles.canonical.com> *** This bug is a duplicate of bug 1830679 *** https://bugs.launchpad.net/bugs/1830679 This was fixed in https://review.opendev.org/#/c/665566/ and backported to stable/stein (15.0.0), it wasn't backported further. Closing as a duplicate of https://bugs.launchpad.net/neutron/+bug/1830679 ** This bug has been marked a duplicate of bug 1830679 Security groups RBAC cause a major performance degradation -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1855902 Title: Inefficient Security Group listing Status in neutron: New Bug description: Issue: Fetching a large Security Group list takes relatively long as several database queries are made for each Security Group. Context: Listing SG's takes around 9 seconds with ~500 existing SG's, 16 seconds with ~1000 SG's and around 30 seconds with ~1500 existing SG's, so this time seems to grow at least linearly with number of SG's. We've looked at flamegraphs of the neutron controller which show that the stack frame `/usr/lib/python2.7/site- packages/neutron/db/securitygroups_db.py:get_security_groups:166` splits into two long running functions, each taking about half of the time (one at line 112 and the other at 115). ```python 103 @classmethod 104 def get_objects(cls, context, _pager=None, validate_filters=True, 105 **kwargs): 106 # We want to get the policy regardless of its tenant id. We'll make 107 # sure the tenant has permission to access the policy later on. 108 admin_context = context.elevated() 109 with cls.db_context_reader(admin_context): 110 objs = super(RbacNeutronDbObjectMixin, 111 cls).get_objects(admin_context, _pager, 112 validate_filters, **kwargs) 113 result = [] 114 for obj in objs: 115 if not cls.is_accessible(context, obj): 116 continue 117 result.append(obj) 118 return result ``` We've also seen that the number of database queries also seems to grow linearly: * Listing ~500 SG's performs ~2100 queries * Listing ~1000 SG's performs ~3500 queries * Listing ~1500 SG's performs ~5200 queries This does not scale well, we're expecting a neglectable increase in listing time. Reproduction: * Create 1000 SG's * Execute `time openstack security group list` * Create 500 more SG's * Execute `time openstack security group list` Version: We're using neutron 14.0.2-1 on CentOS 7.7.1908. 
Perceived Severity: MEDIUM To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1855902/+subscriptions From mriedem.os at gmail.com Thu Dec 12 18:44:26 2019 From: mriedem.os at gmail.com (Matt Riedemann) Date: Thu, 12 Dec 2019 18:44:26 -0000 Subject: [Openstack-security] [Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users References: <157308280286.2833.4356246734465945672.malonedeb@gac.canonical.com> Message-ID: <157617627010.18387.7933103566205583129.launchpad@wampee.canonical.com> ** Changed in: nova Status: New => Triaged -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1851587 Title: HypervisorUnavailable error leaks compute host fqdn to non-admin users Status in OpenStack Compute (nova): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: Description =========== When an instance encounters a HypervisorUnavailable error, the non-admin user gets the info of the compute host fqdn in the error message. Steps to reproduce ================== 1. Spin up an instance with non-admin user credentials 2. To reproduce the error, stop the libvirtd service on the compute host containing instance 3. Delete the instance 4. Deletion fails providing HypervisorUnavailable error Expected result =============== Error does not show compute host fqdn to a non-admin user Actual result ============= #spin up an instance +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ACTIVE | None | Running | private=192.168.100.158, 10.0.0.243 | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ #instance is running on compute-0 node (only admin knows this) [heat-admin at compute-0 ~]$ sudo virsh list --all Id Name State ---------------------------------------------------- 108 instance-00000092 running #stop libvirtd service [root at compute-0 heat-admin]# systemctl stop tripleo_nova_libvirt.service [root at compute-0 heat-admin]# systemctl status tripleo_nova_libvirt.service ● tripleo_nova_libvirt.service - nova_libvirt container Loaded: loaded (/etc/systemd/system/tripleo_nova_libvirt.service; enabled; vendor preset: disabled) Active: inactive (dead) since Wed 2019-11-06 22:48:25 UTC; 5s ago Process: 8514 ExecStop=/usr/bin/podman stop -t 10 nova_libvirt (code=exited, status=0/SUCCESS) Main PID: 3783 Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.443603571 +0000 UTC m=+1.325620613 container init 
a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.475946808 +0000 UTC m=+1.357963869 container start a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpl> Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: nova_libvirt Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: Creating additional drop-in dependency for "nova_libvirt" (a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb) Nov 06 22:29:49 compute-0 systemd[1]: Started nova_libvirt container. Nov 06 22:48:24 compute-0 systemd[1]: Stopping nova_libvirt container... Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.595405651 +0000 UTC m=+1.063832024 container died a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.597210594 +0000 UTC m=+1.065636903 container stop a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla> Nov 06 22:48:25 compute-0 podman[8514]: a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb Nov 06 22:48:25 compute-0 systemd[1]: Stopped nova_libvirt container. #delete the instance, it leaks compute host fqdn to the non-admin user (overcloud) [stack at undercloud-0 ~]$ nova delete test-11869 Request to delete server test-11869 has been accepted. (overcloud) [stack at undercloud-0 ~]$ openstack server list --long +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ | 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ERROR | None | Running | | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | | +--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+ (overcloud) [stack at undercloud-0 ~]$ openstack server show test-11869 <---debug output attached in logs +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | error | | OS-SRV-USG:launched_at | 2019-11-06T22:13:08.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | config_drive | | | created | 2019-11-06T22:12:57Z | | description | None | | fault | {'code': 500, 'created': '2019-11-06T23:01:45Z', 'message': 'Connection to the hypervisor is broken on host: 
compute-0.redhat.local'} | | flavor | disk='1', ephemeral='0', , original_name='m1.tiny', ram='512', swap='0', vcpus='1' | | hostId | c7e6bf58b57f435659bb0aa9637c7f830f776ec202a0d6e430ee3168 | | id | 4f42886d-e1f8-4607-a09d-0dc12a681880 | | image | cirros-0.4.0-x86_64-disk.img (5d0bd6a5-7331-4ebe-9328-d126189897e2) | | key_name | None | | locked | False | | locked_reason | None | | name | test-11869 | | project_id | 6e39619e17a9478580c93120e1cb16bc | | properties | | | server_groups | [] | | status | ERROR | | tags | [] | | trusted_image_certificates | None | | updated | 2019-11-06T23:01:45Z | | user_id | 3cd6a8cb88eb49d3a84f9e67d89df598 | | volumes_attached | | +-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions From 1824248 at bugs.launchpad.net Thu Dec 12 23:15:06 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 12 Dec 2019 23:15:06 -0000 Subject: [Openstack-security] [Bug 1824248] Fix merged to neutron (master) References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157619250661.18725.4922990687344540102.malone@wampee.canonical.com> Reviewed: https://review.opendev.org/681910 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b898d2e3c08b50e576ee849fbe8614c66f360c62 Submitter: Zuul Branch: master commit b898d2e3c08b50e576ee849fbe8614c66f360c62 Author: Slawek Kaplonski Date: Thu Sep 12 22:02:52 2019 +0200 List SG rules which belongs to tenant's SG In case when user's security group contains rules created e.g. by admin, and such rules has got admin's tenant as tenant_id, owner of security group should be able to see those rules. Some time ago this was addressed for request: GET /v2.0/security-groups/ But it is also required to behave in same way for GET /v2.0/security-group-rules So this patch fixes this behaviour for listing of security group rules. To achieve that this patch also adds new policy rule: ADMIN_OWNER_OR_SG_OWNER which is similar to already existing ADMIN_OWNER_OR_NETWORK_OWNER used e.g. for listing or creating ports. Change-Id: I09114712582d2d38d14cf1683b87a8ce3a8e8c3c Closes-Bug: #1824248 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. 
When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions From 1597557 at bugs.launchpad.net Wed Dec 18 10:15:03 2019 From: 1597557 at bugs.launchpad.net (Radomir Dopieralski) Date: Wed, 18 Dec 2019 10:15:03 -0000 Subject: [Openstack-security] [Bug 1597557] Re: getting CSRF token missing or incorrect. /api/nova/servers/ when CSRF_COOKIE_HTTPONLY=True References: <20160630003449.23279.98663.malonedeb@gac.canonical.com> Message-ID: <157666410358.27579.3944256527312094619.malone@chaenomeles.canonical.com> Enabling CSRF_COOKIE_HTTPONLY = True doesn't give any protection, see https://docs.djangoproject.com/en/3.0/ref/settings/#csrf-cookie-httponly ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1597557 Title: getting CSRF token missing or incorrect. /api/nova/servers/ when CSRF_COOKIE_HTTPONLY=True Status in OpenStack Dashboard (Horizon): Invalid Bug description: Using stable/mitkaka if I set CSRF_COOKIE_HTTPONLY=True in local_settings.py, when i try to launch an instance i get Forbidden (CSRF token missing or incorrect.): /api/nova/servers/ If i set it to false (or don't set it) then it works fine. 
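The failure is most likely down to how the token travels on AJAX calls: Horizon's REST endpoints such as /api/nova/servers/ are invoked from JavaScript, which in the standard Django pattern reads the csrftoken cookie and copies it into the X-CSRFToken request header; once CSRF_COOKIE_HTTPONLY = True hides that cookie from scripts, the header is missing and Django rejects the POST. Below is a sketch of the local_settings.py combination reported to work, with that reasoning spelled out in comments (the values themselves are the ones from this report, the comments are an assumed explanation):

```python
# local_settings.py sketch -- the working combination from this report,
# annotated with the assumed reason each value is safe or required.

# Serve cookies only over TLS:
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True

# The session cookie is never read by client-side JavaScript, so it can
# be HttpOnly without breaking anything:
SESSION_COOKIE_HTTPONLY = True

# The CSRF cookie must remain readable from JavaScript so the client can
# copy it into the X-CSRFToken header on POSTs to /api/... endpoints;
# per the Django docs referenced above, hiding it adds little protection
# anyway:
CSRF_COOKIE_HTTPONLY = False
```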
This is what does not work # If Horizon is being served through SSL, then uncomment the following two # settings to better secure the cookies from security exploits CSRF_COOKIE_SECURE = True SESSION_COOKIE_SECURE = True # prevent certain client-side attacks, such as cross-site scripting CSRF_COOKIE_HTTPONLY = True SESSION_COOKIE_HTTPONLY = True this is what does work # If Horizon is being served through SSL, then uncomment the following two # settings to better secure the cookies from security exploits CSRF_COOKIE_SECURE = True SESSION_COOKIE_SECURE = True # prevent certain client-side attacks, such as cross-site scripting CSRF_COOKIE_HTTPONLY = False SESSION_COOKIE_HTTPONLY = True To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1597557/+subscriptions From 1855902 at bugs.launchpad.net Wed Dec 18 11:50:04 2019 From: 1855902 at bugs.launchpad.net (Kevin S) Date: Wed, 18 Dec 2019 11:50:04 -0000 Subject: [Openstack-security] [Bug 1855902] Re: Inefficient Security Group listing References: <157599549287.6430.1011745077318200014.malonedeb@soybean.canonical.com> Message-ID: <157666980436.2622.14134679744543887764.malone@soybean.canonical.com> *** This bug is a duplicate of bug 1830679 *** https://bugs.launchpad.net/bugs/1830679 This issue isn't fixed by https://review.opendev.org/#/c/665566/, we have verified the changes have been applied. So we are running neutron version 14.0.3-1 now as we believe it has been backported to 14.0.3: https://bugs.launchpad.net/neutron/+bug/1830679/comments/21 Examples: ~100 security groups: openstack security group list takes about: ~3 seconds ~1000 security groups: openstack security group list takes about: ~16 seconds ~1500 security groups: openstack security group list takes about: ~30 seconds -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1855902 Title: Inefficient Security Group listing Status in neutron: New Bug description: Issue: Fetching a large Security Group list takes relatively long as several database queries are made for each Security Group. Context: Listing SG's takes around 9 seconds with ~500 existing SG's, 16 seconds with ~1000 SG's and around 30 seconds with ~1500 existing SG's, so this time seems to grow at least linearly with number of SG's. We've looked at flamegraphs of the neutron controller which show that the stack frame `/usr/lib/python2.7/site- packages/neutron/db/securitygroups_db.py:get_security_groups:166` splits into two long running functions, each taking about half of the time (one at line 112 and the other at 115). ```python 103 @classmethod 104 def get_objects(cls, context, _pager=None, validate_filters=True, 105 **kwargs): 106 # We want to get the policy regardless of its tenant id. We'll make 107 # sure the tenant has permission to access the policy later on. 
108 admin_context = context.elevated() 109 with cls.db_context_reader(admin_context): 110 objs = super(RbacNeutronDbObjectMixin, 111 cls).get_objects(admin_context, _pager, 112 validate_filters, **kwargs) 113 result = [] 114 for obj in objs: 115 if not cls.is_accessible(context, obj): 116 continue 117 result.append(obj) 118 return result ``` We've also seen that the number of database queries also seems to grow linearly: * Listing ~500 SG's performs ~2100 queries * Listing ~1000 SG's performs ~3500 queries * Listing ~1500 SG's performs ~5200 queries This does not scale well, we're expecting a neglectable increase in listing time. Reproduction: * Create 1000 SG's * Execute `time openstack security group list` * Create 500 more SG's * Execute `time openstack security group list` Version: We're using neutron 14.0.2-1 on CentOS 7.7.1908. Perceived Severity: MEDIUM To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1855902/+subscriptions From 1734320 at bugs.launchpad.net Wed Dec 18 13:36:14 2019 From: 1734320 at bugs.launchpad.net (Paul Peereboom) Date: Wed, 18 Dec 2019 13:36:14 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157667617493.5587.17691613788750377588.malone@gac.canonical.com> Hi Sean, I've finally managed to setup a Stein environment to test the isolate_vif option. I tried to set isolate_vif in nova.conf but instances than fails to spawn with the following in the nova- compute.log: 2019-12-18 13:32:07.180 30660 ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-303dc31c-3c1e-446e-8f1e-83b604cee0c6 68910e67a8cb49e1af2d2bb3b4e3fa49 d5d22a5d7e194a0789a07a3c4d2affd6 - default default] Unable to e xecute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--may-exist', 'add-port', 'br-int', 'qvo5639e4bf-52', '--', 'set', 'Interface', 'qvo5639e4bf-52', 'ext ernal_ids:iface-id=5639e4bf-52d7-4674-9084-e258acb47fae', 'external_ids:iface-status=active', 'external_ids:attached-mac=fa:16:3e:d8:ff:c8', 'external_ids:vm-uuid=1a6cfd1e-7414-4ac3-8c5d-81918a445ab5', 't ag=4095']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-port br-int qvo5639e4bf-52 -- set Interface qvo5639e4bf-52 external_ids:iface-id=5639e4bf-52d7-4674-9084 -e258acb47fae external_ids:iface-status=active external_ids:attached-mac=fa:16:3e:d8:ff:c8 external_ids:vm-uuid=1a6cfd1e-7414-4ac3-8c5d-81918a445ab5 tag=4095 Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: Interface does not contain a column whose name matches "tag"\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 
2019-12-18 13:32:07.254 30660 INFO os_vif [req-303dc31c-3c1e-446e-8f1e-83b604cee0c6 68910e67a8cb49e1af2d2bb3b4e3fa49 d5d22a5d7e194a0789a07a3c4d2affd6 - default default] Successfully plugged vif VIFBridge( active=False,address=fa:16:3e:d8:ff:c8,bridge_name='qbr5639e4bf-52',has_traffic_filtering=True,id=5639e4bf-52d7-4674-9084-e258acb47fae,network=Network(c4197b2d-f568-4bd2-9b0a-eb0fa05e7428),plugin='ovs',po rt_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5639e4bf-52') 2019-12-18 13:32:07.275 30660 ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-be4f125b-e4e5-427d-ba84-b0fb86478807 68910e67a8cb49e1af2d2bb3b4e3fa49 d5d22a5d7e194a0789a07a3c4d2affd6 - default default] Unable to e xecute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp:127.0.0.1:6640', '--', '--may-exist', 'add-port', 'br-int', 'qvo8e2e0cb9-27', '--', 'set', 'Interface', 'qvo8e2e0cb9-27', 'ext ernal_ids:iface-id=8e2e0cb9-2785-4d09-835d-f20897035cf2', 'external_ids:iface-status=active', 'external_ids:attached-mac=fa:16:3e:41:40:43', 'external_ids:vm-uuid=4773263b-d538-4e1b-b405-7c39c96eb7e2', 't ag=4095']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-port br-int qvo8e2e0cb9-27 -- set Interface qvo8e2e0cb9-27 external_ids:iface-id=8e2e0cb9-2785-4d09-835d -f20897035cf2 external_ids:iface-status=active external_ids:attached-mac=fa:16:3e:41:40:43 external_ids:vm-uuid=4773263b-d538-4e1b-b405-7c39c96eb7e2 tag=4095 Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: Interface does not contain a column whose name matches "tag"\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. Openstack version: Stein [root at compute1 (GN0) ~]# ovs-vsctl -V ovs-vsctl (Open vSwitch) 2.11.0 DB Schema 7.16.1 [root at compute1 (GN0) ~]# rpm -qa | grep neutron openstack-neutron-common-14.0.3-0.20190923200444.5eb234b.el8ost.noarch openstack-neutron-openvswitch-14.0.3-0.20190923200444.5eb234b.el8ost.noarch python3-neutron-lib-1.25.0-0.20190521130309.fc2a810.el8ost.noarch python3-neutronclient-6.12.0-0.20190312100012.680b417.el8ost.noarch openstack-neutron-14.0.3-0.20190923200444.5eb234b.el8ost.noarch python3-neutron-14.0.3-0.20190923200444.5eb234b.el8ost.noarch openstack-neutron-ml2-14.0.3-0.20190923200444.5eb234b.el8ost.noarch [root at compute1 (GN0) ~]# rpm -qa | grep nova python3-novaclient-13.0.1-0.20190710173535.ef842ca.el8ost.noarch python3-nova-19.0.3-0.20190814170534.a8e19af.el8ost.noarch openstack-nova-common-19.0.3-0.20190814170534.a8e19af.el8ost.noarch openstack-nova-compute-19.0.3-0.20190814170534.a8e19af.el8ost.noarch Do I miss something? Regards, Paul -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. 
Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. 
Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1734320 at bugs.launchpad.net Wed Dec 18 13:57:17 2019 From: 1734320 at bugs.launchpad.net (Paul Peereboom) Date: Wed, 18 Dec 2019 13:57:17 -0000 Subject: [Openstack-security] [Bug 1734320] Re: Eavesdropping private traffic References: <151152217834.14483.1577991310209811902.malonedeb@soybean.canonical.com> Message-ID: <157667743804.14917.17435438018144121050.malone@wampee.canonical.com> Aha think I got it, looks like the tag order is wrong: Fails: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-port br-int qvo8e2e0cb9-27 -- set Interface qvoc94a40f4-a3 external_ids:iface-id=8e2e0cb9-2785-4d09-835d-f20897035cf2 external_ids:iface-status=active external_ids:attached-mac=fa:16:3e:41:40:43 external_ids:vm-uuid=4773263b-d538-4e1b-b405-7c39c96eb7e2 tag=4095 Works: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp:127.0.0.1:6640 -- --may-exist add-port br-int qvo8e2e0cb9-27 tag=4095 -- set Interface qvoc94a40f4-a3 external_ids:iface-id=8e2e0cb9-2785-4d09-835d-f20897035cf2 external_ids:iface-status=active external_ids:attached-mac=fa:16:3e:41:40:43 external_ids:vm-uuid=4773263b-d538-4e1b-b405-7c39c96eb7e2 -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
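The Fails/Works pair above comes down to which OVSDB table owns the column: tag belongs to the Port table, not Interface, so it has to be supplied with add-port (or via a separate '-- set Port <dev> tag=4095') rather than inside the 'set Interface' section. A rough sketch of a command builder that respects this split (parameter names are illustrative, not the exact os-vif signature; the global ovs-vsctl options are omitted):

```python
def build_add_port_cmd(bridge, dev, iface_id, mac, vm_uuid, tag=None):
    cmd = ['ovs-vsctl', '--', '--if-exists', 'del-port', dev,
           '--', 'add-port', bridge, dev]
    if tag is not None:
        cmd += ['tag=%d' % tag]               # Port column -> set here
    cmd += ['--', 'set', 'Interface', dev,    # Interface columns only
            'external_ids:iface-id=%s' % iface_id,
            'external_ids:iface-status=active',
            'external_ids:attached-mac=%s' % mac,
            'external_ids:vm-uuid=%s' % vm_uuid]
    return cmd


print(' '.join(build_add_port_cmd(
    'br-int', 'qvo8e2e0cb9-27',
    '8e2e0cb9-2785-4d09-835d-f20897035cf2',
    'fa:16:3e:41:40:43',
    '4773263b-d538-4e1b-b405-7c39c96eb7e2',
    tag=4095)))
```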
https://bugs.launchpad.net/bugs/1734320 Title: Eavesdropping private traffic Status in neutron: Fix Committed Status in OpenStack Compute (nova): In Progress Status in os-vif: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Eavesdropping private traffic ============================= Abstract -------- We've discovered a security issue that allows end users within their own private network to receive from, and send traffic to, other private networks on the same compute node. Description ----------- During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants. If the port is administratively down during live migration, the port will remain in trunk mode indefinitely. Traffic is possible between ports is that are administratively down, even between tenants self-service networks. Conditions ---------- The following conditions are necessary. * Openvswitch Self-service networks * An Openstack administrator or an automated process needs to schedule a Live migration We tested this on newton. Issues ------ This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation. Issue #1 Initially creating a trunk port When the port is initially created, it is in trunk mode. This creates a fail-open situation. See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52 Recommendation: create ports in the port_dead state, don't leave it dangling in trunk-mode. Add a drop-flow initially. Issue #2 Order of creation. The instance is actually migrated before the (networking) configuration is completed. Recommendation: wait with finishing the live migration until the underlying configuration has been applied completely. Issue #3 Not closing the port when it is down. Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged. https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995 Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens. Issue #4 Putting the port administratively down actually puts the port on a compute node shared vlan Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules. Recommendation: Increase the port_dead openflow drop rule priority to MAX Timeline --------  2017-09-14 Discovery eavesdropping issue  2017-09-15 Verify workaround.  2017-10-04 Discovery port-down-traffic issue  2017-11-24 Vendor Disclosure to Openstack Steps to reproduce ------------------ 1. Attach an instance to two networks: admin$ openstack server create --nic net-id= --nic net-id = --image --flavor instance_temp 2. Attach a FIP to the instance to be able to log in to this instance 3. 
Verify: admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425 -9d3c-8e2cd383547d +-----------+-----------------------------------------------------------------------------+ | Field | Value | +-----------+-----------------------------------------------------------------------------+ | addresses | network1=192.168.99.8, ; network2=192.168.80.14 | | name | instance_temp | +-----------+-----------------------------------------------------------------------------+ 4. Ssh to the instance using network1 and run a tcpdump on the other port network2 [root at instance_temp]$ tcpdump -eeenni eth1 5. Get port-id of network2 admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | Port State | Port ID | Net ID | IP addresses | MAC Addr | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ | ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b | | ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa | +------------+--------------------------------------+--------------------------------------+---------------+-------------------+ 6. Force port down on network 2 admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False 7. Port gets tagged with vlan 4095, the dead vlan tag, which is normal: compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN 8. Verify the port is tagged with vlan 4095 compute1# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           tag: 4095           Interface "qvoa848520b-08" 9. Now live-migrate the instance: admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d 10. Verify the tag is gone on compute2, and take a deep breath compute2# ovs-vsctl show | grep -A3 qvoa848520b-08       Port "qvoa848520b-08"           Interface "qvoa848520b-08"       Port... compute2# echo "Wut!" 11. 
Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp [root at instance_temp] tcpdump -eenni eth1 13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28 13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28 13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28 13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28 13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28 13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28 13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28 13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28 13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28 13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20 13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28 13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28 13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28 13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28 13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24 13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20 Workaround ---------- We temporary fixed this issue by forcing the dead vlan tag on port creation on compute nodes: /usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py: def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,                         instance_id, interface_type=None,                         vhost_server_path=None): + # ODCN: initialize port as dead + # ODCN: TODO: set drop flow     cmd = ['--', '--if-exists', 
'del-port', dev, '--',             'add-port', bridge, dev, + 'tag=4095',             '--', 'set', 'Interface', dev,             'external-ids:iface-id=%s' % iface_id,             'external-ids:iface-status=active',             'external-ids:attached-mac=%s' % mac,             'external-ids:vm-uuid=%s' % instance_id]     if interface_type:         cmd += ['type=%s' % interface_type]     if vhost_server_path:         cmd += ['options:vhost-server-path=%s' % vhost_server_path]     return cmd https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995     def port_dead(self, port, log_errors=True):         '''Once a port has no binding, put it on the "dead vlan".         :param port: an ovs_lib.VifPort object.         '''         # Don't kill a port if it's already dead         cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",                                          log_errors=log_errors) + # ODCN GM 20170915 + if not cur_tag: + LOG.error('port_dead(): port %s has no tag', port.port_name) + # ODCN AJS 20170915 + if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG: - if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:            LOG.info('port_dead(): put port %s on dead vlan', port.port_name)            self.int_br.set_db_attribute("Port", port.port_name, "tag",                                          constants.DEAD_VLAN_TAG,                                          log_errors=log_errors)             self.int_br.drop_port(in_port=port.ofport) plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py     def drop_port(self, in_port): + # ODCN AJS 20171004: - self.install_drop(priority=2, in_port=in_port) + self.install_drop(priority=65535, in_port=in_port) Regards, ODC Noord. Gerhard Muntingh Albert Siersema Paul Peereboom To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1734320/+subscriptions From 1824248 at bugs.launchpad.net Fri Dec 20 00:43:49 2019 From: 1824248 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 20 Dec 2019 00:43:49 -0000 Subject: [Openstack-security] [Bug 1824248] Re: Security Group filtering hides rules from user References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com> Message-ID: <157680262937.27180.4977735320410618393.malone@chaenomeles.canonical.com> Reviewed: https://review.opendev.org/688715 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a6b55d760bee4ea228a466cd45193f77eae53978 Submitter: Zuul Branch: stable/train commit a6b55d760bee4ea228a466cd45193f77eae53978 Author: Slawek Kaplonski Date: Thu Sep 12 22:02:52 2019 +0200 List SG rules which belongs to tenant's SG In case when user's security group contains rules created e.g. by admin, and such rules has got admin's tenant as tenant_id, owner of security group should be able to see those rules. Some time ago this was addressed for request: GET /v2.0/security-groups/ But it is also required to behave in same way for GET /v2.0/security-group-rules So this patch fixes this behaviour for listing of security group rules. To achieve that this patch also adds new policy rule: ADMIN_OWNER_OR_SG_OWNER which is similar to already existing ADMIN_OWNER_OR_NETWORK_OWNER used e.g. for listing or creating ports. Change-Id: I09114712582d2d38d14cf1683b87a8ce3a8e8c3c Closes-Bug: #1824248 ** Tags added: in-stable-train -- You received this bug notification because you are a member of OpenStack Security SIG, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1824248 Title: Security Group filtering hides rules from user Status in neutron: Fix Released Status in OpenStack Security Advisory: Won't Fix Bug description: Manage Rules part of the GUI hides the rules currently visible in the Launch Instance modal window. It allows a malicious admin to add backdoor access rules that might be later added to VMs without the knowledge of owner of those VMs. When sending GET request as below, it responds only with the rules that are created by user and this happens when using Manage Rules part of the GUI: On the other hand when using GET request as below, it responds with all SG and it includes all rules, and there is no filtering and this is used in Launch Instance modal window: Here is example of rules display in Manage Rules part of GUI: > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data() -> return api.neutron.security_group_get(self.request, sg_id) (Pdb) l  45 @memoized.memoized_method  46 def _get_data(self):  47 sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])  48 try:  49 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  50 -> return api.neutron.security_group_get(self.request, sg_id)  51 except Exception:  52 redirect = reverse('horizon:project:security_groups:index')  53 exceptions.handle(self.request,  54 _('Unable to retrieve security group.'),  55 redirect=redirect) (Pdb) p api.neutron.security_group_get(self.request, sg_id) , , , ]}> (Pdb) (Pdb) p self.request As you might have noticed there are no ports access 44 and 22 (SSH) And from the Launch Instance Modal Window, as well as CLI we can see that there are two more rules that are invisible for user, port 44 and 22 (SSH) as displayed below: > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get() -> return {'items': [sg.to_dict() for sg in security_groups]} (Pdb) l  42 """  43  44 security_groups = api.neutron.security_group_list(request)  45 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()  46  47 -> return {'items': [sg.to_dict() for sg in security_groups]}  48  49  50 @urls.register  51 class FloatingIP(generic.View):  52 """API for a single floating IP address.""" (Pdb) p security_groups [, , , , , ]}>] (Pdb) (Pdb) p request Thank you, Robin To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions
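For reference, the behaviour described in the merged change quoted earlier ("List SG rules which belongs to tenant's SG") can be summed up in a small illustrative predicate -- this is not the actual neutron policy code, just the visibility rule it describes: a rule is listable by an admin, by the rule's own tenant, or by the tenant that owns the security group it belongs to.

```python
from dataclasses import dataclass


@dataclass
class SGRule:
    tenant_id: str       # may be the admin's tenant for admin-created rules
    sg_tenant_id: str    # tenant that owns the parent security group


def rule_visible(rule: SGRule, request_tenant: str, is_admin: bool) -> bool:
    return (
        is_admin
        or rule.tenant_id == request_tenant      # plain admin-or-owner
        or rule.sg_tenant_id == request_tenant   # ADMIN_OWNER_OR_SG_OWNER adds this
    )


# Before the fix, GET /v2.0/security-group-rules only applied the first two
# conditions, so an admin-created rule inside a user's own security group
# (like the hidden port 22/44 rules in this report) stayed invisible to them.
admin_rule = SGRule(tenant_id="admin-tenant", sg_tenant_id="user-tenant")
assert rule_visible(admin_rule, "user-tenant", is_admin=False)
```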