From 1816727 at bugs.launchpad.net  Thu Jun  6 06:24:39 2019
From: 1816727 at bugs.launchpad.net (OpenStack Infra)
Date: Thu, 06 Jun 2019 06:24:39 -0000
Subject: [Openstack-security] [Bug 1816727] Fix included in openstack/nova 19.0.1
References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com>
Message-ID: <155980227935.23880.8627083139595185537.malone@chaenomeles.canonical.com>

This issue was fixed in the openstack/nova 19.0.1 release.

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1816727

Title:
  nova-novncproxy does not handle TCP RST cleanly when using SSL

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Description
  ===========

  We have nova-novncproxy configured to use SSL:

  ```
  [DEFAULT]
  ssl_only = true
  cert = /etc/nova/ssl/certs/signing_cert.pem
  key = /etc/nova/ssl/private/signing_key.pem
  ...

  [vnc]
  enabled = True
  server_listen = "0.0.0.0"
  server_proxyclient_address = 192.168.237.81
  novncproxy_host = 192.168.237.81
  novncproxy_port = 5554
  novncproxy_base_url = https://:6080/vnc_auto.html
  xvpvncproxy_host = 192.168.237.81
  ```

  We also have haproxy acting as a load balancer, but not terminating
  SSL. We have an haproxy health check configured like this for
  nova-novncproxy:

  ```
  listen nova-novncproxy
      # irrelevant config...
      server 192.168.237.84:5554 check check-ssl verify none inter 2000 rise 5 fall 2
  ```

  where 192.168.237.81 is a virtual IP address and 192.168.237.84 is
  the node's individual IP address.

  With that health check enabled, we found the nova-novncproxy process
  CPU spiking and eventually causing the node to hang. With debug
  logging enabled, we noticed this in the nova-novncproxy logs:

  2019-02-19 15:02:44.148 2880 INFO nova.console.websocketproxy [-] WebSocket server settings:
  2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-]   - Listen on 192.168.237.81:5554
  2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-]   - Flash security policy server
  2019-02-19 15:02:44.149 2880 INFO nova.console.websocketproxy [-]   - Web server (no directory listings). Web root: /usr/share/novnc
  2019-02-19 15:02:44.150 2880 INFO nova.console.websocketproxy [-]   - SSL/TLS support
  2019-02-19 15:02:44.151 2880 INFO nova.console.websocketproxy [-]   - proxying from 192.168.237.81:5554 to None:None
  2019-02-19 15:02:45.015 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873
  2019-02-19 15:02:45.184 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308
  2019-02-19 15:02:45.377 2889 DEBUG nova.context [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Found 2 cells: 00000000-0000-0000-0000-000000000000(cell0),9f9825dd-868f-41cc-9c8e-e544f1528d6a(cell1) load_cells /usr/lib/python2.7/site-packages/nova/context.py:479
  2019-02-19 15:02:45.380 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2019-02-19 15:02:45.382 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2019-02-19 15:02:45.393 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2019-02-19 15:02:45.395 2889 DEBUG oslo_concurrency.lockutils [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Lock "9f9825dd-868f-41cc-9c8e-e544f1528d6a" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2019-02-19 15:02:45.437 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308
  2019-02-19 15:02:45.443 2889 DEBUG oslo_db.sqlalchemy.engines [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:308
  2019-02-19 15:02:45.451 2889 INFO nova.compute.rpcapi [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] Automatically selected compute RPC version 5.0 from minimum service version 35
  2019-02-19 15:02:45.452 2889 INFO nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] handler exception: [Errno 104] Connection reset by peer
  2019-02-19 15:02:45.452 2889 DEBUG nova.console.websocketproxy [req-8552f48d-8c1c-4330-aaec-64d544c6cc1e - - - - -] exception vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy Traceback (most recent call last):
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 928, in top_new_client
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 858, in do_handshake
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     self.RequestHandlerClass(retsock, address, self)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 311, in __init__
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     websockify.ProxyRequestHandler.__init__(self, *args, **kwargs)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 113, in __init__
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.__init__(self, req, addr, server)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/SocketServer.py", line 652, in __init__
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     self.handle()
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/websockify/websocket.py", line 579, in handle
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.handle(self)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     self.handle_one_request()
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 310, in handle_one_request
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     self.raw_requestline = self.rfile.readline(65537)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/socket.py", line 480, in readline
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     data = self._sock.recv(self._rbufsize)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 190, in recv
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     return self._base_recv(buflen, flags, into=False)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 217, in _base_recv
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     read = self.read(nbytes)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 135, in read
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     super(GreenSSLSocket, self).read, *args, **kwargs)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/site-packages/eventlet/green/ssl.py", line 109, in _call_trampolining
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     return func(*a, **kw)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy   File "/usr/lib64/python2.7/ssl.py", line 673, in read
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy     v = self._sslobj.read(len)
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy error: [Errno 104] Connection reset by peer
  2019-02-19 15:02:45.452 2889 ERROR nova.console.websocketproxy
  2019-02-19 15:02:47.037 2880 DEBUG nova.console.websocketproxy [-] 192.168.237.85: new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:873

  (paste: http://paste.openstack.org/show/745451/)

  This sequence starting with the "new handler Process" repeats
  continuously.

  It seems that the haproxy health checks initiate an SSL connection
  but then immediately send a TCP RST:
  http://git.haproxy.org/?p=haproxy.git;a=commit;h=fd29cc537b8511db6e256529ded625c8e7f856d0

  For most services this does not seem to be an issue, but for
  nova-novncproxy it repeatedly initializes NovaProxyRequestHandler,
  which creates a full nova.compute.rpcapi.ComputeAPI instance, which
  very quickly starts to consume significant CPU and overtake the host.

  Note that we tried upgrading to HEAD of websockify and eventlet,
  which did not improve the issue.

  Our workaround was to turn off check-ssl in haproxy and use a basic
  tcp check, but we're concerned that nova-novncproxy remains
  vulnerable to a DoS attack given how easy it is for haproxy to
  overload the service. For that reason I'm opening this initially as a
  security bug, though you could perhaps argue that it's no secret that
  making un-ratelimited requests at any service will cause high load.

  Steps to reproduce
  ==================

  1. Configure nova-novncproxy to use SSL by setting the cert= and key=
     parameters in [DEFAULT] and turn on debug logging.

  2. You can simulate the haproxy SSL health check with this python
     script:

     import socket, ssl, struct, time

     host = '192.168.237.81'
     port = 5554

     while True:
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         ssl_sock = ssl.wrap_socket(sock)
         ssl_sock.connect((host, port))
         ssl_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                             struct.pack('ii', 1, 0))
         sock.close()
         time.sleep(2)

  Expected result
  ===============

  nova-novncproxy should gracefully handle the RST and not start
  overutilizing CPU. It should probably hold off on initializing
  database connections and such until a meaningful request other than
  an SSL HELLO is received.

  Actual result
  =============

  The nova-novncproxy process quickly jumps to the top of the CPU%
  metrics of process analyzers like top and htop and, if left
  unattended on a server with few cores, will cause the server's
  overall performance to be degraded.
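  [Editor's note: the "hold off on initializing" behaviour suggested
  under "Expected result" above is essentially lazy initialization. A
  minimal sketch of that pattern, with hypothetical names and requiring
  Python 3.8+ for functools.cached_property; this illustrates the idea
  only and is not the actual nova fix:]

  ```python
  import functools


  class ProxyRequestHandler:
      """Handler that defers expensive setup until first real use."""

      @functools.cached_property
      def compute_api(self):
          # Stands in for building nova.compute.rpcapi.ComputeAPI with
          # its database and message-queue connections. Runs once, and
          # only when a request actually needs it.
          return self._build_compute_api()

      def _build_compute_api(self):
          print("expensive setup happens here, on first real request")
          return object()

      def handle(self, request_line):
          # A health probe that resets the connection never sends a
          # request line, so it never triggers the expensive setup.
          if request_line:
              return self.compute_api
  ```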
  Environment
  ===========

  We found this on the latest of the stable/rocky branch on SLES:

  # cat /etc/os-release
  NAME="SLES"
  VERSION="12-SP4"
  VERSION_ID="12.4"
  PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4"

  # uname -a
  Linux d52-54-77-77-01-01 4.12.14-95.6-default #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8) x86_64 x86_64 x86_64 GNU/Linux

  # zypper info openstack-nova
  Information for package openstack-nova:
  ---------------------------------------
  Repository     : Cloud
  Name           : openstack-nova
  Version        : 18.1.1~dev47-749.1
  Arch           : noarch
  Vendor         : obs://build.suse.de/Devel:Cloud
  Support Level  : Level 3
  Installed Size : 444.7 KiB
  Installed      : Yes
  Status         : up-to-date
  Source package : openstack-nova-18.1.1~dev47-749.1.src
  Summary        : OpenStack Compute (Nova)

  # zypper info haproxy
  Information for package haproxy:
  --------------------------------
  Repository     : Cloud
  Name           : haproxy
  Version        : 1.6.11-10.2
  Arch           : x86_64
  Vendor         : SUSE LLC
  Support Level  : Level 3
  Installed Size : 3.1 MiB
  Installed      : Yes
  Status         : up-to-date
  Source package : haproxy-1.6.11-10.2.src

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions
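[Editor's note: the graceful-RST handling the report asks for can be
illustrated outside of nova. Below is a small, standard-library-only
sketch of a threaded TLS server that treats a reset during the
handshake or first read as a normal disconnect; the echo behaviour,
port, and certificate paths are placeholders, not nova's actual code.]

```python
import socketserver
import ssl


class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        try:
            data = self.rfile.readline(65537)
        except (ConnectionResetError, ssl.SSLError):
            # An haproxy-style probe that connects, completes the TLS
            # handshake and then sends a TCP RST lands here; treat it
            # as a normal disconnect rather than a fatal handler error.
            return
        if data:
            self.wfile.write(data)  # trivial echo for the sketch


class TLSEchoServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
    daemon_threads = True

    def __init__(self, addr, ctx):
        super().__init__(addr, EchoHandler)
        self.ctx = ctx

    def get_request(self):
        conn, addr = self.socket.accept()
        # The TLS handshake runs inside wrap_socket(); if the peer
        # resets mid-handshake this raises OSError, which the base
        # server already treats as "drop this connection" instead of
        # letting it kill the worker.
        return self.ctx.wrap_socket(conn, server_side=True), addr


if __name__ == '__main__':
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain('cert.pem', 'key.pem')  # placeholder paths
    with TLSEchoServer(('127.0.0.1', 5554), ctx) as srv:
        srv.serve_forever()
```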
From fungi at yuggoth.org  Fri Jun 14 01:35:53 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 14 Jun 2019 01:35:53 -0000
Subject: [Openstack-security] [Bug 1752792] Re: Iptables can allow open access to instances in certain situations
References: <151996958593.18552.4795466235188466306.malonedeb@soybean.canonical.com>
Message-ID: <156047615362.26991.5720216210932479153.malone@chaenomeles.canonical.com>

Thanks Sam, I don't expect the Nova security reviewers to object as
they're unable to reproduce this and consider the bug apparently fixed
in the Queens release anyway. Setting the security advisory task to
won't fix state, but it can be revisited if any new information comes
to light.

** Information type changed from Private Security to Public

** Changed in: ossa
       Status: Incomplete => Won't Fix

** Tags added: security

** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
-
- --
-
  Have been debugging some instance hacks on our cloud and we have
  tracked it down to nova setting some iptables to open access to all
  instances.

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1752792

Title:
  Iptables can allow open access to instances in certain situations

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Have been debugging some instance hacks on our cloud and we have
  tracked it down to nova setting some iptables to open access to all
  instances.

  Environment:
  We are using neutron and have reproduced this on Newton and Ocata; it
  looks like Mitaka is not affected.

  nova.conf options:
  use_neutron=True
  linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  firewall_driver = nova.virt.firewall.NoopFirewallDriver

  neutron is using ML2/linuxbridge

  Steps to reproduce:

  # Clear out all iptables rules
  iptables -F; iptables -X; iptables -t nat -F; iptables -t nat -X; iptables -t mangle -F; iptables -t mangle -X; iptables -t raw -F; iptables -t raw -X

  # restart neutron linuxbridge
  systemctl restart neutron-linuxbridge-agent

  # Wait a couple of seconds for rules to be added
  sleep 5

  # Restart nova-compute
  systemctl restart nova-compute

  If you look in iptables you will see in the FORWARD chain:

  -A FORWARD -j nova-filter-top
  -A FORWARD -j nova-compute-FORWARD
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-linuxbri-FORWARD

  The nova-compute-FORWARD chain will have:

  -A nova-compute-FORWARD -i brq34bf77e6-d8 -j ACCEPT
  -A nova-compute-FORWARD -o brq34bf77e6-d8 -j ACCEPT
  -A nova-compute-FORWARD -o brq34bf77e6-d8 -j DROP

  It seems that once the neutron agent does something on this host,
  e.g. a new instance gets booted/deleted etc., then the rules change
  to:

  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-linuxbri-FORWARD
  -A FORWARD -j nova-filter-top
  -A FORWARD -j nova-compute-FORWARD

  So we don't see this much, just on quiet compute nodes it would seem.

  I feel like nova shouldn't be doing any iptables stuff so I'm scared
  I've got some config wrong, but I haven't found anything so far and
  it looks the same as our Mitaka hosts, which don't see this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1752792/+subscriptions
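[Editor's note: the chain-ordering problem described above is
mechanical enough to check for. A hedged sketch — it assumes the
`iptables` binary is on PATH and the script runs with root privileges,
and the function names are illustrative — that flags when nova's
FORWARD jumps are evaluated ahead of neutron's:]

```python
import subprocess


def forward_jump_order():
    """Return the -j targets of the FORWARD chain, in rule order."""
    out = subprocess.run(
        ['iptables', '-S', 'FORWARD'],
        check=True, capture_output=True, text=True,
    ).stdout
    targets = []
    for line in out.splitlines():
        parts = line.split()
        if '-j' in parts:
            targets.append(parts[parts.index('-j') + 1])
    return targets


def nova_jumps_before_neutron(targets):
    """True when a nova-* jump precedes every neutron-* jump, i.e. the
    misordering this bug report describes."""
    nova = [i for i, t in enumerate(targets) if t.startswith('nova-')]
    neutron = [i for i, t in enumerate(targets) if t.startswith('neutron-')]
    return bool(nova and neutron) and min(nova) < min(neutron)


if __name__ == '__main__':
    order = forward_jump_order()
    print('FORWARD jump order:', order)
    if nova_jumps_before_neutron(order):
        print('WARNING: nova chains are evaluated before neutron chains')
```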
From 1816727 at bugs.launchpad.net  Tue Jun 18 03:41:23 2019
From: 1816727 at bugs.launchpad.net (OpenStack Infra)
Date: Tue, 18 Jun 2019 03:41:23 -0000
Subject: [Openstack-security] [Bug 1816727] Fix included in openstack/nova 18.2.1
References: <155065649227.28374.17032096910895521610.malonedeb@chaenomeles.canonical.com>
Message-ID: <156082928362.26897.16215803362580004252.malone@chaenomeles.canonical.com>

This issue was fixed in the openstack/nova 18.2.1 release.

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1816727

Title:
  nova-novncproxy does not handle TCP RST cleanly when using SSL

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816727/+subscriptions

From bcafarel at redhat.com  Wed Jun 19 14:54:14 2019
From: bcafarel at redhat.com (Bernard Cafarelli)
Date: Wed, 19 Jun 2019 14:54:14 -0000
Subject: [Openstack-security] [Bug 1824248] Re: Security Group filtering hides rules from user
References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com>
Message-ID: <156095605555.12085.4610326821203740467.launchpad@soybean.canonical.com>

** Tags added: neutron-proactive-backport-potential

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1824248

Title:
  Security Group filtering hides rules from user

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Manage Rules part of the GUI hides the rules currently visible in the
  Launch Instance modal window. It allows a malicious admin to add
  backdoor access rules that might be later added to VMs without the
  knowledge of the owner of those VMs.
  When sending a GET request as below, it responds only with the rules
  that are created by the user, and this happens when using the Manage
  Rules part of the GUI:

  On the other hand, when using a GET request as below, it responds
  with all SGs and it includes all rules, with no filtering, and this
  is used in the Launch Instance modal window:

  Here is an example of the rules display in the Manage Rules part of
  the GUI:

  > /opt/stack/horizon/openstack_dashboard/dashboards/project/security_groups/views.py(50)_get_data()
  -> return api.neutron.security_group_get(self.request, sg_id)
  (Pdb) l
   45         @memoized.memoized_method
   46         def _get_data(self):
   47             sg_id = filters.get_int_or_uuid(self.kwargs['security_group_id'])
   48             try:
   49                 from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()
   50  ->             return api.neutron.security_group_get(self.request, sg_id)
   51             except Exception:
   52                 redirect = reverse('horizon:project:security_groups:index')
   53                 exceptions.handle(self.request,
   54                                   _('Unable to retrieve security group.'),
   55                                   redirect=redirect)
  (Pdb) p api.neutron.security_group_get(self.request, sg_id)
  , , , ]}>
  (Pdb)
  (Pdb) p self.request

  As you might have noticed, there are no rules for ports 44 and 22
  (SSH).

  And from the Launch Instance modal window, as well as the CLI, we can
  see that there are two more rules that are invisible to the user,
  for ports 44 and 22 (SSH), as displayed below:

  > /opt/stack/horizon/openstack_dashboard/api/rest/network.py(47)get()
  -> return {'items': [sg.to_dict() for sg in security_groups]}
  (Pdb) l
   42             """
   43
   44             security_groups = api.neutron.security_group_list(request)
   45             from remote_pdb import RemotePdb; RemotePdb('127.0.0.1', 444).set_trace()
   46
   47  ->         return {'items': [sg.to_dict() for sg in security_groups]}
   48
   49
   50     @urls.register
   51     class FloatingIP(generic.View):
   52         """API for a single floating IP address."""
  (Pdb) p security_groups
  [, , , , , ]}>]
  (Pdb)
  (Pdb) p request

  Thank you,
  Robin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions
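[Editor's note: the asymmetry Robin describes — the per-group view
filters rules while the list view returns everything — suggests a
simple audit. A sketch using openstacksdk, assuming a configured
clouds.yaml or OS_* environment variables; the rule fields follow the
Neutron security-group-rule API, and the function name is illustrative.
It flags rules created in a project other than the group owner's, i.e.
the rules the filtered GUI view would hide:]

```python
import openstack


def foreign_rules(conn):
    """Yield (group, rule) pairs where a rule's project differs from
    the owning security group's project."""
    for group in conn.network.security_groups():
        for rule in group.security_group_rules:
            rule_project = rule.get('project_id') or rule.get('tenant_id')
            if rule_project and rule_project != group.project_id:
                yield group, rule


if __name__ == '__main__':
    conn = openstack.connect()  # reads clouds.yaml / OS_* env vars
    for group, rule in foreign_rules(conn):
        print('group %s (%s): rule %s created in project %s' % (
            group.name, group.id, rule.get('id'),
            rule.get('project_id') or rule.get('tenant_id')))
```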
From 411299920 at qq.com  Tue Jun 25 00:59:02 2019
From: 411299920 at qq.com (ZhangChi)
Date: Tue, 25 Jun 2019 00:59:02 -0000
Subject: [Openstack-security] [Bug 1824248] Re: Security Group filtering hides rules from user
References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com>
Message-ID: <156142434287.17726.15128987785206724441.malone@wampee.canonical.com>

https://review.opendev.org/#/c/660174/

This solution bp can result in another bug as follows:
https://review.opendev.org/#/c/660174/

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1824248

Title:
  Security Group filtering hides rules from user

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions

From 411299920 at qq.com  Tue Jun 25 01:01:17 2019
From: 411299920 at qq.com (ZhangChi)
Date: Tue, 25 Jun 2019 01:01:17 -0000
Subject: [Openstack-security] [Bug 1824248] Re: Security Group filtering hides rules from user
References: <155493822208.21486.11348171578680982627.malonedeb@soybean.canonical.com>
Message-ID: <156142447782.17582.8152875126352336939.malone@gac.canonical.com>

https://review.opendev.org/#/c/660174/

This solution patch can result in another bug as follows:
https://bugs.launchpad.net/tricircle/+bug/1831848

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1824248

Title:
  Security Group filtering hides rules from user

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1824248/+subscriptions