From morgan.fainberg at gmail.com Mon Dec 1 20:03:05 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 01 Dec 2014 20:03:05 -0000 Subject: [Openstack-security] [Bug 1396849] Re: internalURL and adminURL of endpoints should not be visible to ordinary user References: <20141127023956.32696.46752.malonedeb@gac.canonical.com> Message-ID: <20141201200305.32348.40341.malone@chaenomeles.canonical.com> Based on the ML topic, and given that the admin/internal URL is not universal (nor clearly isolated), this is not something that we can likely fix without breaking the API contract. We could look at changing the format of the catalog, but I think this is a much, much bigger topic. Many actions need access to the different interfaces to succeed. Second, if someone does not have the endpoint in the catalog, it doesn't prevent them from accessing/using the endpoint if they know it a priori. This is not something that I expect we will change. This should be handled in policy enforcement (currently policy.json). Longer term we are looking at providing endpoint binding - in theory we could expand this to cover the differing interfaces *where* possible. Feel free to comment at https://review.openstack.org/#/c/123726/ on the token constraint specification, which will include the ability to restrict the user from accessing a specific endpoint if they are not authorized to do so. ** Changed in: keystone Status: New => Won't Fix -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1396849 Title: internalURL and adminURL of endpoints should not be visible to ordinary user Status in OpenStack Identity (Keystone): Won't Fix Bug description: If an ordinary user sends a get-token request to Keystone, the internalURL and adminURL of endpoints will also be returned.
This exposes the internal, high-privilege access addresses to the ordinary user and creates the risk that a malicious user could attack or hijack the system. The request to get a token for an ordinary user: curl -d '{"auth":{"passwordCredentials":{"username": "huawei", "password": "2014"},"tenantName":"huawei"}}' -H "Content-type: application/json" http://localhost:5000/v2.0/tokens The response: {"access": {"token": {"issued_at": "2014-11-27T02:30:59.218772", "expires": "2014-11-27T03:30:59Z", "id": "b8684d2b68ab49d5988da9197f38a878", "tenant": {"description": "normal Tenant", "enabled": true, "id": "7ed3351cd58349659f0bfae002f76a77", "name": "huawei"}, "audit_ids": ["Ejn3BtaBTWSNtlj7beE9bQ"]}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77", "region": "regionOne", "internalURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77", "id": "170a3ae617a1462c81bffcbc658b7746", "publicURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://10.67.148.27:9696", "region": "regionOne", "internalURL": "http://10.67.148.27:9696", "id": "7c0f28aa4710438bbd84fd25dbe4daa6", "publicURL": "http://10.67.148.27:9696"}], "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://10.67.148.27:9292", "region": "regionOne", "internalURL": "http://10.67.148.27:9292", "id": "576f41fc8ef14b4f90e516bb45897491", "publicURL": "http://10.67.148.27:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://10.67.148.27:8777", "region": "regionOne", "internalURL": "http://10.67.148.27:8777", "id": "77d464e146f242aca3c50e10b6cfdaa0", "publicURL": "http://10.67.148.27:8777"}], "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://10.67.148.27:6385", "region": "regionOne", "internalURL":
"http://10.67.148.27:6385", "id": "1b8177826e0c426fa73e5519c8386589", "publicURL": "http://10.67.148.27:6385"}], "endpoints_links": [], "type": "baremetal", "name": "ironic"}, {"endpoints": [{"adminURL": "http://10.67.148.27:35357/v2.0", "region": "regionOne", "internalURL": "http://10.67.148.27:5000/v2.0", "id": "435ae249fd2a427089cb4bf2e6c0b8e9", "publicURL": "http://10.67.148.27:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "huawei", "roles_links": [], "id": "a88a40a635334e5da2ac3523d9780ed3", "roles": [{"name": "_member_"}], "name": "huawei"}, "metadata": {"is_admin": 0, "roles": ["73b0a1ac6b0c48cb90205c53f2b9e48d"]}}} To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1396849/+subscriptions From d.w.chadwick at kent.ac.uk Mon Dec 1 20:50:37 2014 From: d.w.chadwick at kent.ac.uk (David Chadwick) Date: Mon, 01 Dec 2014 20:50:37 +0000 Subject: [Openstack-security] [Bug 1396849] Re: internalURL and adminURL of endpoints should not be visible to ordinary user In-Reply-To: <20141201200305.32348.40341.malone@chaenomeles.canonical.com> References: <20141127023956.32696.46752.malonedeb@gac.canonical.com> <20141201200305.32348.40341.malone@chaenomeles.canonical.com> Message-ID: <547CD49D.5030403@kent.ac.uk> +1 On 01/12/2014 20:03, Morgan Fainberg wrote: > Based on the ML topic, and that admin/internal URL is not universal (nor > clearly isolated) this is not something that we can likely fix without > breaking the API contract. We could look at changing the format of the > catalog, but I think this is a much, much, bigger topic. Many actions > need access to the different interfaces to succeed. > > Second, if someone does not have the endpoint in the catalog it doesn't > prevent them from accessing/using the endpoint if they know if apriori. > This is not something that I expect we will change. 
> This should be handled in policy enforcement (currently policy.json) > > Longer term we are looking at providing endpoint binding - in theory we > could expand this to cover the differing interfaces *where* possible. > Feel free to comment at https://review.openstack.org/#/c/123726/ on the > token constraint specification, which will include the ability to > restrict the user from accessing a specific endpoint if they are not > authorized to do so. > > ** Changed in: keystone > Status: New => Won't Fix > From 1031139 at bugs.launchpad.net Mon Dec 1 20:55:40 2014 From: 1031139 at bugs.launchpad.net (melanie witt) Date: Mon, 01 Dec 2014 20:55:40 -0000 Subject: [Openstack-security] [Bug 1031139] Re: quota-show should return error for invalid tenant id References: <20120730234300.27634.40825.malonedeb@gac.canonical.com> Message-ID: <20141201205541.26252.75369.launchpad@gac.canonical.com> ** Summary changed: - quota-show does not handle alternatives for tenant_id as expected + quota-show should return error for invalid tenant id ** Changed in: python-novaclient Importance: Undecided => Medium -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1031139 Title: quota-show should return error for invalid tenant id Status in Python client library for Nova: Confirmed Bug description: quota-show does not handle alternatives for tenant_id as expected ENV: Devstack trunk (Folsom) / nova d56b5fc3ad6dbfc56e0729174925fb146cef87fa , Mon Jul 30 21:59:56 2012 +0000 I'd expect the following command to work, given:

$ env | grep TENANT
OS_TENANT_NAME=demo
$ nova --debug --os_username=admin --os_password=password quota-show
usage: nova quota-show
error: too few arguments

I'd also expect the following to work:

$ nova --debug --os_username=admin --os_password=password quota-show --os_tenant_name=demo
usage: nova quota-show
error: too few arguments

What is more awesome: in the event that I do provide a wrong tenant_id, it proceeds to use OS_TENANT_NAME, returning those results:

$ nova --debug --os_username=admin --os_password=password quota-show gggggggggggggggggggggggggggggggggg
REQ: curl -i http://10.1.11.219:8774/v2/04adebe40d214581b84118bcce264f0e/os-quota-sets/ggggggggggggggggggggggggggggggggggg -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 10bd3f948df24039b2b88b98771b2b99"
+-----------------------------+-------+
| Property                    | Value |
+-----------------------------+-------+
| cores                       | 20    |
| floating_ips                | 10    |
| gigabytes                   | 1000  |
| injected_file_content_bytes | 10240 |
| injected_files              | 5     |
| instances                   | 10    |
| metadata_items              | 128   |
| ram                         | 51200 |
| volumes                     | 10    |
+-----------------------------+-------+

I also couldn't figure out how to get quota-show to work as a member (non-admin) of a project. Let me know if you want any of these issues broken out into additional bugs.
To manage notifications about this bug go to: https://bugs.launchpad.net/python-novaclient/+bug/1031139/+subscriptions From gerrit2 at review.openstack.org Tue Dec 2 07:33:20 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 02 Dec 2014 07:33:20 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I03b9c5c64f4bd8bca78dfc83199ef17d9a7ea5b7 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/130824 Log: commit 41635234366e0c23fe07874c008b9f98774f463b Author: abhishekkekane Date: Tue Oct 21 04:10:57 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persistent connections; hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). SecurityImpact Closes-Bug: #1361360 Change-Id: I03b9c5c64f4bd8bca78dfc83199ef17d9a7ea5b7 From gerrit2 at review.openstack.org Tue Dec 2 08:24:18 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 02 Dec 2014 08:24:18 +0000 Subject: [Openstack-security] [openstack/neutron] SecurityImpact review request change I3a361d6590d1800b85791f23ac1cdfd79815341b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact.
https://review.openstack.org/130834 Log: commit f02678aa6a5b0c3aae293c2da33e10539628b08f Author: abhishekkekane Date: Tue Oct 21 04:15:15 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persistent connections. Hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Added a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). SecurityImpact Closes-Bug: #1361360 Change-Id: I3a361d6590d1800b85791f23ac1cdfd79815341b From 1394988 at bugs.launchpad.net Tue Dec 2 11:34:30 2014 From: 1394988 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 02 Dec 2014 11:34:30 -0000 Subject: [Openstack-security] [Bug 1394988] Re: Hadoop with auto security group not working References: <20141121132920.28673.42565.malonedeb@wampee.canonical.com> Message-ID: <20141202113430.31731.94225.malone@chaenomeles.canonical.com> Reviewed: https://review.openstack.org/136083 Committed: https://git.openstack.org/cgit/openstack/sahara/commit/?id=7c4ea57548c34a84996ba36fbdf6d63b17bf6cf3 Submitter: Jenkins Branch: master commit 7c4ea57548c34a84996ba36fbdf6d63b17bf6cf3 Author: Sergey Reshetnyak Date: Thu Nov 20 21:15:45 2014 +0300 Open all ports for private network for auto SG Closes-bug: #1394988 Change-Id: I07485c286a501bff52f8c67ffd0cc841814a5c9c ** Changed in: sahara Status: In Progress => Fix Committed -- You received this bug notification because you are a
member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1394988 Title: Hadoop with auto security group not working Status in OpenStack Data Processing (Sahara, ex. Savanna): Fix Committed Bug description: ENV: OpenStack with Neutron Steps to reproduce: 1. Create cluster with auto security groups and two or more nodemanagers. 2. Launch pig job with 1Gb input data Result: After ~1 hour, the job is in the KILLED state. Reproduced with the following plugins: Vanilla 2 HDP 2 CDH To manage notifications about this bug go to: https://bugs.launchpad.net/sahara/+bug/1394988/+subscriptions From 1393397 at bugs.launchpad.net Tue Dec 2 11:34:37 2014 From: 1393397 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 02 Dec 2014 11:34:37 -0000 Subject: [Openstack-security] [Bug 1393397] Re: Create Spark cluster with auto security group failed References: <20141117121325.31950.16947.malonedeb@soybean.canonical.com> Message-ID: <20141202113437.31762.1384.malone@chaenomeles.canonical.com> Reviewed: https://review.openstack.org/134927 Committed: https://git.openstack.org/cgit/openstack/sahara/commit/?id=c574e64311cd91410024a9bba8602284d1e5061a Submitter: Jenkins Branch: master commit c574e64311cd91410024a9bba8602284d1e5061a Author: Sergey Reshetnyak Date: Mon Nov 17 16:06:44 2014 +0300 Add list of open ports for Spark plugin It's needed for auto security groups to work correctly Change-Id: I180d5278d8251f946eb3ff294f45cc188cf77e37 Closes-bug: #1393397 ** Changed in: sahara Status: In Progress => Fix Committed -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1393397 Title: Create Spark cluster with auto security group failed Status in OpenStack Data Processing (Sahara, ex. Savanna): Fix Committed Bug description: Steps to reproduce: Create cluster with auto security groups.
Result: Cluster in 'Error' state To manage notifications about this bug go to: https://bugs.launchpad.net/sahara/+bug/1393397/+subscriptions From 1361360 at bugs.launchpad.net Tue Dec 2 13:04:39 2014 From: 1361360 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 02 Dec 2014 13:04:39 -0000 Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141202130439.9888.84162.malone@soybean.canonical.com> Reviewed: https://review.openstack.org/130843 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=04d7a724fdf80db51e73f12c5b8c982db9310742 Submitter: Jenkins Branch: master commit 04d7a724fdf80db51e73f12c5b8c982db9310742 Author: abhishekkekane Date: Tue Oct 21 01:37:42 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persistent connections; hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). SecurityImpact Closes-Bug: #1361360 Change-Id: I399b812f6d452226fd306c423de8dcea8520d2aa ** Changed in: nova Status: In Progress => Fix Committed -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Committed Status in Cinder icehouse series: New Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library by simply not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova api service, the eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned back to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node, determine the wsgi pool size configured for the given service, send that many requests to the service, and after receiving the response, wait there indefinitely doing nothing, disrupting services for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack.
The following program illustrates choking of nova-api services (but this problem is omnipresent in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API. ============================================================================================

import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep time, check if the client socket connection is
        # released or not on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

================================================================================================ Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Additional information: By default eventlet passes "Connection: keepalive" if keepalive is set to True when a response is sent to the client. But it doesn't have the capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5 Note: After we have disabled keepalive in all the OpenStack API services using the wsgi library, it might impact all existing applications built with the assumption that OpenStack API services use persistent connections.
They might need to modify their applications if reconnection logic is not in place, and they might also find that performance has slowed down, as the client will need to reestablish the HTTP connection for every request. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions From gerrit2 at review.openstack.org Tue Dec 2 13:37:00 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 02 Dec 2014 13:37:00 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change Ic57b2aceb136e8626388cfe4df72b2f47cb0661c Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/138365 Log: commit 4041d30611baa476d33627f5078d5bcc12ef50eb Author: abhishekkekane Date: Tue Oct 21 02:31:15 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persistent connections; hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. DocImpact: Added wsgi_keep_alive option (default=True). In order to maintain backward compatibility, wsgi_keep_alive is set to True by default. It is recommended to set it to False.
Conflicts: cinder/wsgi.py etc/cinder/cinder.conf.sample SecurityImpact Closes-Bug: #1361360 Change-Id: Ic57b2aceb136e8626388cfe4df72b2f47cb0661c (cherry picked from commit fc87da7eeb3451e139ee71b31095d0b9093332ce) From gerrit2 at review.openstack.org Tue Dec 2 13:43:55 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 02 Dec 2014 13:43:55 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I399b812f6d452226fd306c423de8dcea8520d2aa Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/138368 Log: commit b17407bea0be530ba81b4592296310644a5943fc Author: abhishekkekane Date: Tue Oct 21 01:37:42 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persistent connections; hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). Conflicts: nova/tests/unit/test_wsgi.py Note: The required unit-tests are manually added to the below path, as new path for unit-tests is not present in stable/juno release.
nova/tests/compute/test_host_api.py SecurityImpact Closes-Bug: #1361360 Change-Id: I399b812f6d452226fd306c423de8dcea8520d2aa (cherry picked from commit 04d7a724fdf80db51e73f12c5b8c982db9310742) From 1361360 at bugs.launchpad.net Tue Dec 2 13:36:57 2014 From: 1361360 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 02 Dec 2014 13:36:57 -0000 Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141202133657.26858.19732.malone@wampee.canonical.com> Fix proposed to branch: stable/icehouse Review: https://review.openstack.org/138365 ** Changed in: cinder/icehouse Status: New => In Progress -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Committed Status in Cinder icehouse series: In Progress Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library by simply not closing the client socket connection.
Whenever a request is received by any OpenStack API service, for example the nova api service, the eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned back to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node, determine the wsgi pool size configured for the given service, send that many requests to the service, and after receiving the response, wait there indefinitely doing nothing, disrupting services for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack. The following program illustrates choking of nova-api services (but this problem is omnipresent in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API. ============================================================================================

import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep time, check if the client socket connection is
        # released or not on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

================================================================================================ Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Additional information: By default eventlet passes "Connection: keepalive" if keepalive is set to True when a response is sent to the client. But it doesn't have the capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5 Note: After we have disabled keepalive in all the OpenStack API services using the wsgi library, it might impact all existing applications built with the assumption that OpenStack API services use persistent connections. They might need to modify their applications if reconnection logic is not in place, and they might also find that performance has slowed down, as the client will need to reestablish the HTTP connection for every request.
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions From 1361360 at bugs.launchpad.net Tue Dec 2 13:43:50 2014 From: 1361360 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 02 Dec 2014 13:43:50 -0000 Subject: [Openstack-security] [Bug 1361360] Fix proposed to nova (stable/juno) References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141202134350.9826.89512.malone@soybean.canonical.com> Fix proposed to branch: stable/juno Review: https://review.openstack.org/138368 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Committed Status in Cinder icehouse series: In Progress Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library by simply not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova api service, the eventlet library creates a green thread from the pool and starts processing the request.
Even after the response is sent to the caller, the green thread is not returned back to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node, determine the wsgi pool size configured for the given service, send that many requests to the service, and after receiving the response, wait there indefinitely doing nothing, disrupting services for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack. The following program illustrates choking of nova-api services (but this problem is omnipresent in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API. ============================================================================================

import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep time, check if the client socket connection is
        # released or not on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

================================================================================================ Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the
  response is sent and read successfully by the client, you simply have
  to set keepalive to False when you create a wsgi server.

  Additional information: By default, eventlet sends "Connection:
  keep-alive" when keepalive is set to True and a response is sent to
  the client, but it has no capability to set the timeout and max
  parameters, for example: Keep-Alive: timeout=10, max=5

  Note: After we have disabled keepalive in all the OpenStack API
  services using the wsgi library, it might impact existing applications
  built on the assumption that OpenStack API services use persistent
  connections. They might need to modify their applications if
  reconnection logic is not in place, and they might find that
  performance has slowed down, since the HTTP connection must be
  re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From gerrit2 at review.openstack.org  Wed Dec 3 18:13:24 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Wed, 03 Dec 2014 18:13:24 +0000
Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I399b812f6d452226fd306c423de8dcea8520d2aa
Message-ID:

Hi, I'd like you to take a look at this patch for potential
SecurityImpact.

https://review.openstack.org/138811

Log:
commit e7379504521ee087c9006b7a2ad5717dd5f06ed2
Author: abhishekkekane
Date: Tue Oct 21 01:37:42 2014 -0700

    Eventlet green threads not released back to pool

    Presently, the wsgi server allows persistent connections, so even
    after the response is sent to the client it does not close the
    client socket connection. Because of this, the green thread is not
    released back to the pool.
    In order to close the client socket connection explicitly after the
    response is sent and read successfully by the client, you simply
    have to set keepalive to False when you create a wsgi server.

    Add a parameter to take advantage of the new(ish) eventlet socket
    timeout behaviour. Allows closing idle client connections after a
    period of time, e.g.:

        $ time nc localhost 8776
        real    1m0.063s

    Setting 'client_socket_timeout = 0' means do not time out.

    DocImpact: Added wsgi_keep_alive option (default=True).
               Added client_socket_timeout option (default=900).

    Conflicts:
        nova/tests/unit/test_wsgi.py

    Note: The required unit tests are manually added at the path below,
    as the new unit-test path is not present in the stable/icehouse
    release.
        nova/tests/compute/test_host_api.py

    SecurityImpact
    Closes-Bug: #1361360
    Change-Id: I399b812f6d452226fd306c423de8dcea8520d2aa
    (cherry picked from commit 04d7a724fdf80db51e73f12c5b8c982db9310742)

From 1361360 at bugs.launchpad.net  Wed Dec 3 18:13:20 2014
From: 1361360 at bugs.launchpad.net (OpenStack Infra)
Date: Wed, 03 Dec 2014 18:13:20 -0000
Subject: [Openstack-security] [Bug 1361360] Fix proposed to nova (stable/icehouse)
References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com>
Message-ID: <20141203181320.26642.20216.malone@gac.canonical.com>

Fix proposed to branch: stable/icehouse
Review: https://review.openstack.org/138811

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
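The effect of a per-connection timeout like the client_socket_timeout option above can be demonstrated with a plain stdlib socket pair (a sketch; the 0.2-second value is illustrative, nova's default is 900 seconds and 0 disables the timeout):

```python
import socket

# An idle client that sends nothing is eventually disconnected instead
# of pinning a worker (green thread) forever.
server_side, client_side = socket.socketpair()
server_side.settimeout(0.2)  # stand-in for client_socket_timeout

try:
    server_side.recv(1024)  # the idle "client" never sends a request
    timed_out = False
except socket.timeout:
    timed_out = True

assert timed_out  # the server gets its worker back
server_side.close()
client_side.close()
```

This complements keepalive=False: the former closes connections after a served request, the timeout reclaims connections that go idle before or between requests.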
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder: Fix Committed
Status in Cinder icehouse series: In Progress
Status in OpenStack Image Registry and Delivery Service (Glance): In Progress
Status in Glance icehouse series: New
Status in OpenStack Identity (Keystone): In Progress
Status in Keystone icehouse series: Confirmed
Status in Keystone juno series: Fix Committed
Status in OpenStack Neutron (virtual network service): In Progress
Status in neutron icehouse series: New
Status in OpenStack Compute (Nova): Fix Committed
Status in OpenStack Compute (nova) icehouse series: New
Status in OpenStack Security Advisories: Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From gerrit2 at review.openstack.org  Wed Dec 3 19:23:46 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Wed, 03 Dec 2014 19:23:46 +0000
Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I64859ad01120782fb17308aac3abb125597c3ea2
Message-ID:

Hi, I'd like you to take a look at this patch for potential
SecurityImpact.

https://review.openstack.org/115484

Log:
commit 298e4d94398405af0205cccd2a0dcb0a14143503
Author: Solly Ross
Date: Tue Aug 19 19:21:52 2014 -0400

    Add VeNCrypt (TLS/x509) Security Proxy Driver

    This adds support for using x509/TLS security between the compute
    node and websocket proxy when using websockify to proxy VNC traffic.

    In order to use this with x509, an operator would have to set up
    client keys and certificates, as well as CA certificates, and
    configure libvirt to pass the appropriate options to QEMU (this is
    configured globally for libvirt, not by Nova). This process is
    documented on the libvirt website.
    Then, the operator would enable this driver and set the following
    options in /etc/nova/nova.conf:

        [console_proxy_tls]
        client_key = /path/to/client/keyfile
        client_cert = /path/to/client/cert.pem
        ca_certs = /path/to/ca/cert.pem

    SecurityImpact
    DocImpact
    Implements bp: websocket-proxy-to-host-security
    Change-Id: I64859ad01120782fb17308aac3abb125597c3ea2

From davanum at gmail.com  Thu Dec 4 18:22:27 2014
From: davanum at gmail.com (Davanum Srinivas (DIMS))
Date: Thu, 04 Dec 2014 18:22:27 -0000
Subject: [Openstack-security] [Bug 1188189] Re: Some server-side 'SSL' communication fails to check certificates (use of HTTPSConnection)
References: <20130606134102.14097.28030.malonedeb@soybean.canonical.com>
Message-ID: <20141204182231.25913.22734.launchpad@wampee.canonical.com>

** Changed in: oslo.vmware
   Status: New => Fix Committed

** Changed in: oslo.vmware
   Importance: Undecided => Medium

** Changed in: oslo.vmware
   Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1188189

Title:
  Some server-side 'SSL' communication fails to check certificates (use
  of HTTPSConnection)

Status in Cinder: In Progress
Status in OpenStack Identity (Keystone): Fix Released
Status in OpenStack Neutron (virtual network service): In Progress
Status in Oslo VMware library for OpenStack projects: Fix Committed
Status in OpenStack Security Advisories: Won't Fix
Status in OpenStack Security Notes: Fix Released
Status in Python client library for Keystone: Fix Released
Status in OpenStack Object Storage (Swift): Invalid

Bug description:
  Grant Murphy from Red Hat reported usage of httplib.HTTPSConnection
  objects. In Python 2.x these do not perform CA checks, so client
  connections are vulnerable to man-in-the-middle (MITM) attacks.
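The verification gap behind bug 1188189 can be illustrated with the Python 3 stdlib ssl module (a sketch; `_create_unverified_context` is the private helper PEP 476 added for legacy opt-out, shown here only to contrast the two modes):

```python
import ssl

# A properly verifying client builds a context like this, which
# validates both the certificate chain and the hostname:
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

# The vulnerable Python 2 HTTPSConnection behaviour corresponds to a
# context that skips verification entirely:
legacy = ssl._create_unverified_context()
assert legacy.verify_mode == ssl.CERT_NONE
assert not legacy.check_hostname
```

Passing a verifying context (e.g. to http.client.HTTPSConnection(context=ctx)) is what the per-project fixes tracked in this bug amount to.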
""" The following files use httplib.HTTPSConnection : keystone/middleware/s3_token.py keystone/middleware/ec2_token.py keystone/common/bufferedhttp.py vendor/python-keystoneclient-master/keystoneclient/middleware/auth_token.py AFAICT HTTPSConnection does not validate server certificates and should be avoided. This is fixed in Python 3, however in 2.X no validation occurs. I suspect this is also applicable to most OpenStack modules that make HTTPS client calls. Similar problems were found in ovirt: https://bugzilla.redhat.com/show_bug.cgi?id=851672 (CVE-2012-3533) With solutions for ovirt: http://gerrit.ovirt.org/#/c/7209/ http://gerrit.ovirt.org/#/c/7249/ """ To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1188189/+subscriptions From gerrit2 at review.openstack.org Thu Dec 4 20:14:18 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Thu, 04 Dec 2014 20:14:18 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I64859ad01120782fb17308aac3abb125597c3ea2 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/115484 Log: commit 09a1bba0680c1b50e5bdad4badd136dd9597118c Author: Solly Ross Date: Tue Aug 19 19:21:52 2014 -0400 Add VeNCrypt (TLS/x509) Security Proxy Driver This adds support for using x509/TLS security between the compute node and websocket proxy when using websockify to proxy VNC traffic. In order to use this with x509, an operator would have to set up client keys and certificates, as well as CA certificates, and configure libvirt to pass the appropriate options to QEmu (this is configured globally for libvirt, not by Nova). This is process is documented on the libvirt website. 
    Then, the operator would enable this driver and set the following
    options in /etc/nova/nova.conf:

        [console_proxy_tls]
        client_key = /path/to/client/keyfile
        client_cert = /path/to/client/cert.pem
        ca_certs = /path/to/ca/cert.pem

    SecurityImpact
    DocImpact
    Implements bp: websocket-proxy-to-host-security
    Change-Id: I64859ad01120782fb17308aac3abb125597c3ea2

From 1361360 at bugs.launchpad.net  Thu Dec 4 21:16:41 2014
From: 1361360 at bugs.launchpad.net (Alan Pevec)
Date: Thu, 04 Dec 2014 21:16:41 -0000
Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests
References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com>
Message-ID: <20141204211641.26252.78838.launchpad@gac.canonical.com>

** Also affects: cinder/juno
   Importance: Undecided
   Status: New

** Changed in: cinder/juno
   Status: New => Fix Committed

** Changed in: cinder/juno
   Milestone: None => 2014.2.1

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder: Fix Committed
Status in Cinder icehouse series: In Progress
Status in Cinder juno series: Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance): In Progress
Status in Glance icehouse series: New
Status in OpenStack Identity (Keystone): In Progress
Status in Keystone icehouse series: Confirmed
Status in Keystone juno series: Fix Committed
Status in OpenStack Neutron (virtual network service): In Progress
Status in neutron icehouse series: New
Status in OpenStack Compute (Nova): Fix Committed
Status in OpenStack Compute (nova) icehouse series: New
Status in OpenStack Security Advisories: Won't Fix
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From 1384626 at bugs.launchpad.net  Thu Dec 4 22:10:31 2014
From: 1384626 at bugs.launchpad.net (Alan Pevec)
Date: Thu, 04 Dec 2014 22:10:31 -0000
Subject: [Openstack-security] [Bug 1384626] Re: SSL certification verification failed when Heat calls Glanceclient with ca cert
References: <20141023090711.3131.60470.malonedeb@wampee.canonical.com>
Message-ID: <20141204221034.26058.34344.launchpad@wampee.canonical.com>

** Changed in: heat/juno
   Milestone: None => 2014.2.1

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1384626

Title:
  SSL certification verification failed when Heat calls Glanceclient
  with ca cert

Status in Orchestration API (Heat): Fix Committed
Status in heat juno series: Fix Committed

Bug description:
  The Glance server is configured for HTTPS, and Heat is configured via
  heat.conf:

      [clients_glance]
      ca_file=
      insecure=

  When trying to create a stack, Heat raises an exception while loading
  the image data:

      [Errno 1] _ssl.c:492: error:14090086:SSL
      routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

  The root cause: ca_file, as below, is the wrong argument with which to
  initialize the glance client; it should be cacert, which is the
  argument glanceclient supports.
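In other words, the fix amounts to renaming the key when Heat builds the client's argument dict. A minimal sketch (the helper function is hypothetical; only the ca_file-to-cacert rename comes from the report):

```python
def adapt_glance_ssl_args(args):
    """Rename Heat's ca_file option to the cacert keyword glanceclient expects.

    Hypothetical helper sketching the rename described in bug 1384626;
    the actual fix edits the dict built in GlanceClientPlugin._create.
    """
    fixed = dict(args)
    if 'ca_file' in fixed:
        fixed['cacert'] = fixed.pop('ca_file')
    return fixed

args = {
    'service_type': 'image',
    'ca_file': '/etc/heat/ca.pem',  # wrong keyword: glanceclient ignores it
    'insecure': False,
}
fixed = adapt_glance_ssl_args(args)
assert fixed['cacert'] == '/etc/heat/ca.pem'
assert 'ca_file' not in fixed
```

Because the unknown keyword was silently dropped, verification fell back to defaults and failed, rather than producing a clear configuration error.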
  class GlanceClientPlugin(client_plugin.ClientPlugin):

      exceptions_module = exc

      def _create(self):
          con = self.context
          endpoint_type = self._get_client_option('glance', 'endpoint_type')
          endpoint = self.url_for(service_type='image',
                                  endpoint_type=endpoint_type)
          args = {
              'auth_url': con.auth_url,
              'service_type': 'image',
              'project_id': con.tenant,
              'token': self.auth_token,
              'endpoint_type': endpoint_type,
              'ca_file': self._get_client_option('glance', 'ca_file'),
              'cert_file': self._get_client_option('glance', 'cert_file'),
              'key_file': self._get_client_option('glance', 'key_file'),
              'insecure': self._get_client_option('glance', 'insecure')

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1384626/+subscriptions

From 1380642 at bugs.launchpad.net  Fri Dec 5 03:36:38 2014
From: 1380642 at bugs.launchpad.net (Akihiro Motoki)
Date: Fri, 05 Dec 2014 03:36:38 -0000
Subject: [Openstack-security] [Bug 1380642] Re: Horizon should not log token
References: <20141013141256.17942.18062.malonedeb@chaenomeles.canonical.com>
Message-ID: <20141205033640.26609.82496.launchpad@gac.canonical.com>

** Changed in: horizon
   Milestone: None => kilo-1

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1380642

Title:
  Horizon should not log token

Status in OpenStack Dashboard (Horizon): Fix Committed

Bug description:
  This is the Horizon version of bug 1327019. Various modules in
  openstack_dashboard/api log the token. In other modules, the token
  value is no longer logged and is output as *REDACTED* or some similar
  string. In Horizon's case, these log lines are simply removed to fix
  the issue, since this logging seems unnecessary in most cases. I don't
  think this needs to be private, based on the discussion in bug
  1327019.
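Where a token must still be correlated across log lines, the usual alternative to outright removal is logging a masked fingerprint instead of the secret. A sketch (the SHA-256 truncation scheme is illustrative only; Horizon's actual fix simply removed the log statements):

```python
import hashlib

def redact_token(token_id):
    # Log a short, non-reversible fingerprint instead of the raw token,
    # so log readers can correlate requests without seeing the secret.
    digest = hashlib.sha256(token_id.encode('utf-8')).hexdigest()
    return '{SHA256}' + digest[:12]

token = 'b8684d2b68ab49d5988da9197f38a878'
masked = redact_token(token)
assert token not in masked
assert masked.startswith('{SHA256}')
```

The offending pattern this guards against is visible in the novaclient() helper quoted below from the bug report, which interpolated request.user.token.id directly into a debug message.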
  def novaclient(request):
      insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
      cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
      LOG.debug('novaclient connection created using token "%s" and url "%s"' %
                (request.user.token.id, base.url_for(request, 'compute')))
      c = nova_client.Client(request.user.username,
                             request.user.token.id,
                             project_id=request.user.tenant_id,
                             auth_url=base.url_for(request, 'compute'),
                             insecure=insecure,
                             cacert=cacert,
                             http_log_debug=settings.DEBUG)
      c.client.auth_token = request.user.token.id
      c.client.management_url = base.url_for(request, 'compute')
      return c

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1380642/+subscriptions

From 1384626 at bugs.launchpad.net  Fri Dec 5 08:10:53 2014
From: 1384626 at bugs.launchpad.net (Alan Pevec)
Date: Fri, 05 Dec 2014 08:10:53 -0000
Subject: [Openstack-security] [Bug 1384626] Re: SSL certification verification failed when Heat calls Glanceclient with ca cert
References: <20141023090711.3131.60470.malonedeb@wampee.canonical.com>
Message-ID: <20141205081056.26372.36084.launchpad@gac.canonical.com>

** Changed in: heat/juno
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1384626

Title:
  SSL certification verification failed when Heat calls Glanceclient
  with ca cert

Status in Orchestration API (Heat): Fix Committed
Status in heat juno series: Fix Released

From 1361360 at bugs.launchpad.net  Fri Dec 5 08:09:23 2014
From: 1361360 at bugs.launchpad.net (Alan Pevec)
Date: Fri, 05 Dec 2014 08:09:23 -0000
Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests
References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com>
Message-ID: <20141205080930.32459.8518.launchpad@chaenomeles.canonical.com>

** Changed in: cinder/juno
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder: Fix Committed
Status in Cinder icehouse series: In Progress
Status in Cinder juno series: Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance): In Progress
Status in Glance icehouse series: New
Status in OpenStack Identity (Keystone): In Progress
Status in Keystone icehouse series: Confirmed
Status in Keystone juno series: Fix Committed
Status in OpenStack Neutron (virtual network service): In Progress
Status in neutron icehouse series: New
Status in OpenStack Compute (Nova): Fix Committed
Status in OpenStack Compute (nova) icehouse series: New
Status in OpenStack Security Advisories: Won't Fix

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From gerrit2 at review.openstack.org  Fri Dec 5 17:02:38 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Fri, 05 Dec 2014 17:02:38 +0000
Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I64859ad01120782fb17308aac3abb125597c3ea2
Message-ID:

Hi, I'd like you to take a look at this patch for potential
SecurityImpact.

https://review.openstack.org/115484

Log:
commit 339605ebc6922bd68888682a7f3fc5dd38c07810
Author: Solly Ross
Date: Tue Aug 19 19:21:52 2014 -0400

    Add VeNCrypt (TLS/x509) Security Proxy Driver

    This adds support for using x509/TLS security between the compute
    node and websocket proxy when using websockify to proxy VNC traffic.

    In order to use this with x509, an operator would have to set up
    client keys and certificates, as well as CA certificates, and
    configure libvirt to pass the appropriate options to QEMU (this is
    configured globally for libvirt, not by Nova). This process is
    documented on the libvirt website.
    Then, the operator would enable this driver and set the following
    options in /etc/nova/nova.conf:

        [console_proxy_tls]
        client_key = /path/to/client/keyfile
        client_cert = /path/to/client/cert.pem
        ca_certs = /path/to/ca/cert.pem

    SecurityImpact
    DocImpact
    Implements bp: websocket-proxy-to-host-security
    Change-Id: I64859ad01120782fb17308aac3abb125597c3ea2

From gerrit2 at review.openstack.org  Sat Dec 6 22:01:06 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Sat, 06 Dec 2014 22:01:06 +0000
Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change If3f88d8db4a726219573d0f1b65908408e3aa6a9
Message-ID:

Hi, I'd like you to take a look at this patch for potential
SecurityImpact.

https://review.openstack.org/139672

Log:
commit 5e0680bc68d3165d738b02758821375b82a250ca
Author: Matthew Gilliard
Date: Fri Dec 5 16:14:52 2014 +0000

    WIP: Adds ssl_overrides for client configurations

    We want a consistent way to apply SSL config to the various http(s)
    clients that nova creates. Following an ML discussion[1], this is a
    POC for an approach in which each client uses the global options in
    CONF.ssl.* with optional local overrides. These are DictOpts in
    each client's config section, for example:

        [ssl]
        ca_file = /etc/ssl/ca_file

        [glance]
        ssl_overrides = {ca_file:/etc/ssl/glance_ca_file}

    The keys which can be overridden are: ca_file, key_file, cert_file.
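The override semantics described in that commit message can be sketched as a plain dict merge (illustrative only; the real patch uses oslo.config DictOpts and nova's config machinery, not these hypothetical helpers):

```python
def effective_ssl_config(global_ssl, ssl_overrides):
    """Merge a client's ssl_overrides on top of the global [ssl] options.

    Only the three keys named in the review may be overridden; anything
    else in the overrides dict is ignored.
    """
    allowed = {'ca_file', 'key_file', 'cert_file'}
    merged = dict(global_ssl)
    merged.update({k: v for k, v in ssl_overrides.items() if k in allowed})
    return merged

# Values taken from the example in the commit message above.
global_ssl = {'ca_file': '/etc/ssl/ca_file'}
glance = effective_ssl_config(global_ssl,
                              {'ca_file': '/etc/ssl/glance_ca_file'})
cinder = effective_ssl_config(global_ssl, {})  # no overrides: global wins

assert glance['ca_file'] == '/etc/ssl/glance_ca_file'
assert cinder['ca_file'] == '/etc/ssl/ca_file'
```

The design keeps one authoritative [ssl] section while still letting an individual client (e.g. glance) trust a different CA.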
    SecurityImpact: SSL config of Nova's Glance client
    DocImpact: New configuration option as described
    Change-Id: If3f88d8db4a726219573d0f1b65908408e3aa6a9

From 1361360 at bugs.launchpad.net  Sun Dec 7 17:37:00 2014
From: 1361360 at bugs.launchpad.net (OpenStack Infra)
Date: Sun, 07 Dec 2014 17:37:00 -0000
Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests
References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com>
Message-ID: <20141207173700.9954.87489.malone@soybean.canonical.com>

Reviewed: https://review.openstack.org/138365
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=4041d30611baa476d33627f5078d5bcc12ef50eb
Submitter: Jenkins
Branch: stable/icehouse

commit 4041d30611baa476d33627f5078d5bcc12ef50eb
Author: abhishekkekane
Date: Tue Oct 21 02:31:15 2014 -0700

    Eventlet green threads not released back to pool

    Presently, the wsgi server allows persistent connections, so even
    after the response is sent to the client it does not close the
    client socket connection. Because of this, the green thread is not
    released back to the pool.

    In order to close the client socket connection explicitly after the
    response is sent and read successfully by the client, you simply
    have to set keepalive to False when you create a wsgi server.

    DocImpact: Added wsgi_keep_alive option (default=True). To maintain
    backward compatibility, wsgi_keep_alive is set to True by default;
    setting it to False is recommended.

    Conflicts:
        cinder/wsgi.py
        etc/cinder/cinder.conf.sample

    SecurityImpact
    Closes-Bug: #1361360
    Change-Id: Ic57b2aceb136e8626388cfe4df72b2f47cb0661c
    (cherry picked from commit fc87da7eeb3451e139ee71b31095d0b9093332ce)

** Changed in: cinder/icehouse
   Status: In Progress => Fix Committed

--
You received this bug notification because you are a member of OpenStack
Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Committed Status in Cinder icehouse series: Fix Committed Status in Cinder juno series: Fix Released Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library by simply not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova API service, the eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned to the pool until the client socket connection is closed. This way, any malicious user can determine the wsgi pool size configured for a given service, send that many API requests to the API controller node, and after receiving the responses simply wait, doing nothing, disrupting service for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack.
The following program illustrates choking of the nova-api service (but this problem is present in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, try to invoke the API.

============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    # The port is important here.
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep, check whether the client socket connection
        # is released on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================

Presently, the wsgi server allows persistent connections: keepalive is True by default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Additional information: By default eventlet sends "Connection: keep-alive" when keepalive is True and a response is sent to the client, but it has no capability to set the timeout and max parameters, e.g. Keep-Alive: timeout=10, max=5. Note: After keepalive is disabled in all OpenStack API services using the wsgi library, it might impact existing applications built on the assumption that OpenStack API services use persistent connections.
They might need to modify their applications if reconnection logic is not in place, and they might also see slower performance, as the HTTP connection must be re-established for every request. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions From gerrit2 at review.openstack.org Mon Dec 8 18:50:56 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 08 Dec 2014 18:50:56 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I64859ad01120782fb17308aac3abb125597c3ea2 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/115484 Log: commit dbd3e8f1fac40922418002b259af98c4631f6ad1 Author: Solly Ross Date: Tue Aug 19 19:21:52 2014 -0400 Add VeNCrypt (TLS/x509) Security Proxy Driver This adds support for using x509/TLS security between the compute node and websocket proxy when using websockify to proxy VNC traffic. In order to use this with x509, an operator would have to set up client keys and certificates, as well as CA certificates, and configure libvirt to pass the appropriate options to QEmu (this is configured globally for libvirt, not by Nova). This process is documented on the libvirt website.
Then, the operator would enable this driver and set the following options in /etc/nova/nova.conf: [console_proxy_tls] client_key = /path/to/client/keyfile client_cert = /path/to/client/cert.pem ca_certs = /path/to/ca/cert.pem SecurityImpact DocImpact Implements bp: websocket-proxy-to-host-security Change-Id: I64859ad01120782fb17308aac3abb125597c3ea2 From gerrit2 at review.openstack.org Mon Dec 8 19:21:18 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Mon, 08 Dec 2014 19:21:18 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I8e46d41164e9478b820cad569ba82f25de244620 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/124296 Log: commit cff14b3763df7515405552b56e96f11765c56c74 Author: melanie witt Date: Fri Sep 26 05:15:16 2014 +0000 replace httplib.HTTPSConnection in EC2KeystoneAuth httplib.HTTPSConnection is known to not verify SSL certificates in Python 2.x. This change replaces use of httplib.HTTPSConnection with the requests module. It imports config settings related to SSL verification: ssl.key_file, ssl.cert_file, and ssl.ca_file. It also adds one config setting: keystone_ec2_insecure. 
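For reference, the verification behaviour the commit describes maps onto the stdlib ssl module, which requests builds on; the keystone_ec2_insecure annotations in the comments below are my reading of the commit message, not code from the patch:

```python
import ssl

# Default behaviour (keystone_ec2_insecure = false): the peer certificate
# is verified against the CA bundle and the hostname is checked.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# A CA bundle such as ssl.ca_file would be loaded with:
# ctx.load_verify_locations(cafile="/path/to/ca.pem")

# keystone_ec2_insecure = true disables verification entirely, which is
# effectively what httplib.HTTPSConnection did on Python 2.x. Note this
# uses a private stdlib helper, shown only to illustrate the contrast.
insecure = ssl._create_unverified_context()
assert insecure.verify_mode == ssl.CERT_NONE
```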
By default, SSL verification is on, but can be disabled by setting: keystone_ec2_insecure=true This patch is based on the keystone middleware ec2 token patch: https://review.openstack.org/#/c/76476 SecurityImpact DocImpact Closes-Bug: #1373992 Change-Id: I8e46d41164e9478b820cad569ba82f25de244620 From 1274034 at bugs.launchpad.net Mon Dec 8 21:04:38 2014 From: 1274034 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 08 Dec 2014 21:04:38 -0000 Subject: [Openstack-security] [Bug 1274034] Change abandoned on neutron (master) References: <20140129101504.12361.90017.malonedeb@gac.canonical.com> Message-ID: <20141208210438.25096.78897.malone@wampee.canonical.com> Change abandoned by Kyle Mestery (mestery at mestery.com) on branch: master Review: https://review.openstack.org/119352 Reason: This change is old enough and hasn't seen any updates since September 5, 2014. Abandoning it, please revive it if you plan to work on it again. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1274034 Title: Neutron firewall anti-spoofing does not prevent ARP poisoning Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Security Advisories: Invalid Status in OpenStack Security Notes: Fix Released Bug description: The neutron firewall driver 'iptables_firewall' does not prevent ARP cache poisoning. When anti-spoofing rules are handled by Nova, a list of rules was added through the libvirt network filter feature: - no-mac-spoofing - no-ip-spoofing - no-arp-spoofing - nova-no-nd-reflection - allow-dhcp-server Currently, the neutron firewall driver 'iptables_firewall' handles only MAC and IP anti-spoofing rules. This is a security vulnerability, especially on shared networks.
Reproduce an ARP cache poisoning and man-in-the-middle:
- Create a private network/subnet 10.0.0.0/24
- Start 2 VMs attached to that private network (VM1: IP 10.0.0.3, VM2: 10.0.0.4)
- Log on to VM1 and install ettercap [1]
- Launch the command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
- Log on to VM2 as well (with VNC/spice console) and ping google.fr => ping is ok
- Go back to VM1 and see VM2's ping to google.fr going to VM1 instead of being sent directly to the network gateway, then being forwarded by VM1 to the gateway. The ICMP capture looks something like this [2]
- Go back to VM2 and check the ARP table => the MAC address associated with the GW is the MAC address of VM1
[1] http://ettercap.github.io/ettercap/
[2] http://paste.openstack.org/show/62112/
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions From 1274034 at bugs.launchpad.net Mon Dec 8 21:11:47 2014 From: 1274034 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 08 Dec 2014 21:11:47 -0000 Subject: [Openstack-security] [Bug 1274034] Change abandoned on neutron (master) References: <20140129101504.12361.90017.malonedeb@gac.canonical.com> Message-ID: <20141208211147.18427.83786.malone@gac.canonical.com> Change abandoned by Kyle Mestery (mestery at mestery.com) on branch: master Review: https://review.openstack.org/119976 Reason: This change is old enough and hasn't seen any updates since September 10, 2014. Abandoning it, please revive it if you plan to work on it again. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1274034 Title: Neutron firewall anti-spoofing does not prevent ARP poisoning Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Security Advisories: Invalid Status in OpenStack Security Notes: Fix Released Bug description: The neutron firewall driver 'iptables_firewall' does not prevent ARP cache poisoning. When anti-spoofing rules are handled by Nova, a list of rules was added through the libvirt network filter feature: - no-mac-spoofing - no-ip-spoofing - no-arp-spoofing - nova-no-nd-reflection - allow-dhcp-server Currently, the neutron firewall driver 'iptables_firewall' handles only MAC and IP anti-spoofing rules. This is a security vulnerability, especially on shared networks. Reproduce an ARP cache poisoning and man-in-the-middle:
- Create a private network/subnet 10.0.0.0/24
- Start 2 VMs attached to that private network (VM1: IP 10.0.0.3, VM2: 10.0.0.4)
- Log on to VM1 and install ettercap [1]
- Launch the command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
- Log on to VM2 as well (with VNC/spice console) and ping google.fr => ping is ok
- Go back to VM1 and see VM2's ping to google.fr going to VM1 instead of being sent directly to the network gateway, then being forwarded by VM1 to the gateway.
The ICMP capture looks something like this [2]
- Go back to VM2 and check the ARP table => the MAC address associated with the GW is the MAC address of VM1
[1] http://ettercap.github.io/ettercap/
[2] http://paste.openstack.org/show/62112/
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions From fungi at yuggoth.org Tue Dec 9 00:34:36 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 09 Dec 2014 00:34:36 -0000 Subject: [Openstack-security] [Bug 1400443] Re: Keystone should support pre-hashed passwords References: <20141208182330.19434.7359.malonedeb@soybean.canonical.com> Message-ID: <20141209003437.18558.53755.malone@gac.canonical.com> This is pretty clearly a security hardening request and not a security vulnerability report, so I've adjusted its classification accordingly. ** Also affects: ossa Importance: Undecided Status: New ** Tags added: security ** Information type changed from Public Security to Public ** Changed in: ossa Status: New => Won't Fix -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1400443 Title: Keystone should support pre-hashed passwords Status in OpenStack Identity (Keystone): Incomplete Status in OpenStack Security Advisories: Won't Fix Bug description: Passwords should be allowed to be pre-hashed upon user creation for better security. A user may want to store passwords in a script file, and it is much safer for these to be hashed beforehand so that the password is not in plaintext.
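As a sketch of the idea, a password can be hashed client-side before it ever lands in a provisioning script. The PBKDF2 scheme, salt handling, and output format below are illustrative only and are not the hash format keystone itself stores or would accept:

```python
# Illustration only: hash a password locally so the plaintext never has
# to appear in a script. The parameters here are arbitrary; keystone's
# own password hashing scheme differs.
import binascii
import hashlib
import os

def prehash(password, salt=None, iterations=100000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
    return "%s$%d$%s" % (binascii.hexlify(salt).decode(), iterations,
                         binascii.hexlify(digest).decode())

hashed = prehash("s3cret")
```

The script file would then carry only the salted hash; anyone reading it learns nothing directly useful for logging in.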
This was implemented in keystone at one point https://git.openstack.org/cgit/openstack/keystone/commit/?id=e492bbc68ef41b276a0a18c6dbeda242d46b66f4 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1400443/+subscriptions From farazhyder55 at gmail.com Tue Dec 9 04:17:57 2014 From: farazhyder55 at gmail.com (Muhammad Faraz Hyder) Date: Tue, 9 Dec 2014 09:17:57 +0500 Subject: [Openstack-security] Virtualization of TPM in QEMU Message-ID: Is there anyone who has virtualized the TPM using the KVM/QEMU hypervisor? I am trying to use the IBM software TPM and to virtualize it for the VMs, but I am unable to do so. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Tue Dec 9 09:49:28 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Tue, 9 Dec 2014 09:49:28 +0000 Subject: [Openstack-security] Virtualization of TPM in QEMU In-Reply-To: References: Message-ID: <20141209094928.GC29167@redhat.com> On Tue, Dec 09, 2014 at 09:17:57AM +0500, Muhammad Faraz Hyder wrote: > Is there anyone who has virtualized the TPM using KVM/QEMU Hypervisor. > > I am trying to use IBM software TPM and trying to virtualize it to the VMs > , but unable to do so. QEMU has TPM device emulation, but the backend for the emulation must be a real TPM in the host. As such only a single guest can have a virtual TPM on each host. This makes it essentially useless as a feature for the cloud.
There was work to allow the virtual TPM to be backed by a custom data store, so that all guests on a host could have this functionality, but it was never merged upstream in QEMU Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From gerrit2 at review.openstack.org Tue Dec 9 11:43:12 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 09 Dec 2014 11:43:12 +0000 Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change If492810a2f10fa5954f8c8bb708b14be0b77fb90 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/140304 Log: commit 57ea636e0fcef64436ceacce36c7aedb3bd23819 Author: Stuart McLaren Date: Fri Sep 5 12:48:04 2014 +0000 Add client_socket_timeout option Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added client_socket_timeout option (default=900). 
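The client_socket_timeout option in the commit above is applied by eventlet's wsgi server as a timeout on the client socket; the plain-socket sketch below shows the underlying mechanism rather than the cinder code:

```python
# Sketch of what client_socket_timeout maps to at the socket level:
# an idle client is dropped once no data moves for the given period.
import socket

sock = socket.socket()
sock.settimeout(900.0)   # like client_socket_timeout = 900, the new default
assert sock.gettimeout() == 900.0

sock.settimeout(None)    # like client_socket_timeout = 0: never time out
assert sock.gettimeout() is None
sock.close()
```

With the timeout armed, a blocking recv on an idle connection raises socket.timeout, at which point the server can close the connection and reclaim the worker — which is why the `nc localhost 8776` probe in the commit message returns after the timeout instead of hanging forever.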
SecurityImpact Conflicts: cinder/wsgi.py etc/cinder/cinder.conf.sample Change-Id: If492810a2f10fa5954f8c8bb708b14be0b77fb90 Closes-bug: #1361360 Closes-bug: #1371022 (cherry picked from commit 08bfa77aeccb8ca589e3fb5cf9771879818f59de) From 1361360 at bugs.launchpad.net Tue Dec 9 11:43:07 2014 From: 1361360 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 09 Dec 2014 11:43:07 -0000 Subject: [Openstack-security] [Bug 1361360] Fix proposed to cinder (stable/juno) References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141209114307.19505.5993.malone@soybean.canonical.com> Fix proposed to branch: stable/juno Review: https://review.openstack.org/140304 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Committed Status in Cinder icehouse series: Fix Committed Status in Cinder juno series: Fix Released Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library by simply not closing the client socket connection.
Whenever a request is received by any OpenStack API service, for example the nova API service, the eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned to the pool until the client socket connection is closed. This way, any malicious user can determine the wsgi pool size configured for a given service, send that many API requests to the API controller node, and after receiving the responses simply wait, doing nothing, disrupting service for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack. The following program illustrates choking of the nova-api service (but this problem is present in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, try to invoke the API.

============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    # The port is important here.
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep, check whether the client socket connection
        # is released on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================

Presently, the wsgi server allows persistent connections: keepalive is True by default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Additional information: By default eventlet sends "Connection: keep-alive" when keepalive is True and a response is sent to the client, but it has no capability to set the timeout and max parameters, e.g. Keep-Alive: timeout=10, max=5. Note: After keepalive is disabled in all OpenStack API services using the wsgi library, it might impact existing applications built on the assumption that OpenStack API services use persistent connections. They might need to modify their applications if reconnection logic is not in place, and they might also see slower performance, as the HTTP connection must be re-established for every request. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions From farazhyder55 at gmail.com Tue Dec 9 12:01:20 2014 From: farazhyder55 at gmail.com (Muhammad Faraz Hyder) Date: Tue, 9 Dec 2014 17:01:20 +0500 Subject: [Openstack-security] Virtualization of TPM in QEMU In-Reply-To: <20141209094928.GC29167@redhat.com> References: <20141209094928.GC29167@redhat.com> Message-ID: What about KVM? Can't we virtualize the IBM software TPM in it too? On Tue, Dec 9, 2014 at 2:49 PM, Daniel P.
Berrange wrote: > On Tue, Dec 09, 2014 at 09:17:57AM +0500, Muhammad Faraz Hyder wrote: > > Is there anyone who has virtualized the TPM using KVM/QEMU Hypervisor. > > > > I am trying to use IBM software TPM and trying to virtualize it to the > VMs > > , but unable to do so. > > QEMU has TPM device emulation, but the backend for the emulation must be a > real TPM in the host. As such only a single guest can have a virtual TPM > on each host. This basically it essentially useless as a feature for the > cloud. There was work to allow the virtual TPM to be backed by a custom > data store, so that all guests on a host could have this functionality, > but it was never merged upstream in QEMU > > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ > :| > |: http://libvirt.org -o- http://virt-manager.org > :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ > :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc > :| > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Tue Dec 9 12:04:17 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Tue, 9 Dec 2014 12:04:17 +0000 Subject: [Openstack-security] Virtualization of TPM in QEMU In-Reply-To: References: <20141209094928.GC29167@redhat.com> Message-ID: <20141209120417.GH29167@redhat.com> On Tue, Dec 09, 2014 at 05:01:20PM +0500, Muhammad Faraz Hyder wrote: > What about KVM. Can't we virtualized IBM Software TPM in it too? QEMU & KVM are one and the same thing in terms of devices emulated. 
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From gerrit2 at review.openstack.org Tue Dec 9 17:59:04 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 09 Dec 2014 17:59:04 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I64859ad01120782fb17308aac3abb125597c3ea2 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/115484 Log: commit 977e6604f4b6ddcfe6f468c129cc3f4834ce2ac5 Author: Solly Ross Date: Tue Aug 19 19:21:52 2014 -0400 Add VeNCrypt (TLS/x509) Security Proxy Driver This adds support for using x509/TLS security between the compute node and websocket proxy when using websockify to proxy VNC traffic. In order to use this with x509, an operator would have to set up client keys and certificates, as well as CA certificates, and configure libvirt to pass the appropriate options to QEmu (this is configured globally for libvirt, not by Nova). This is process is documented on the libvirt website. Then, the operator would enable this driver and set the following options in /etc/nova/nova.conf: [console_proxy_tls] client_key = /path/to/client/keyfile client_cert = /path/to/client/cert.pem ca_certs = /path/to/ca/cert.pem SecurityImpact DocImpact Implements bp: websocket-proxy-to-host-security Change-Id: I64859ad01120782fb17308aac3abb125597c3ea2 From gary.w.smith at hp.com Tue Dec 9 21:02:16 2014 From: gary.w.smith at hp.com (Gary W. 
Smith) Date: Tue, 09 Dec 2014 21:02:16 -0000 Subject: [Openstack-security] [Bug 1400872] Re: Show password feature should be configurable References: <20141209205525.24719.53334.malonedeb@wampee.canonical.com> Message-ID: <20141209210218.31454.24740.launchpad@chaenomeles.canonical.com> ** Changed in: horizon Status: New => Confirmed ** Tags added: security -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1400872 Title: Show password feature should be configurable Status in OpenStack Dashboard (Horizon): Confirmed Bug description: Horizon allows the password field to be displayed in plain text. This introduces a potential security risk. Imagine a user leaving their desktop unlocked: if the user saved their password in the browser, a malicious user could go to the Login page and display the OpenStack password. The show password feature should be made configurable for operators who want a more secure deployment of Horizon. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1400872/+subscriptions From gary.w.smith at hp.com Tue Dec 9 21:02:57 2014 From: gary.w.smith at hp.com (Gary W. Smith) Date: Tue, 09 Dec 2014 21:02:57 -0000 Subject: [Openstack-security] [Bug 1400872] Re: Show password feature should be configurable References: <20141209205525.24719.53334.malonedeb@wampee.canonical.com> Message-ID: <20141209210259.24621.34752.launchpad@wampee.canonical.com> ** Changed in: horizon Importance: Undecided => High -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1400872 Title: Show password feature should be configurable Status in OpenStack Dashboard (Horizon): Confirmed Bug description: Horizon allows the password field to be displayed in plain text. This introduces a potential security risk.
Imagine a user leaving their desktop unlocked: if the user saved their password in the browser, a malicious user could go to the Login page and display the OpenStack password. The show password feature should be made configurable for operators who want a more secure deployment of Horizon. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1400872/+subscriptions From fungi at yuggoth.org Tue Dec 9 21:18:00 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 09 Dec 2014 21:18:00 -0000 Subject: [Openstack-security] [Bug 1400872] Re: Show password feature should be configurable References: <20141209205525.24719.53334.malonedeb@wampee.canonical.com> Message-ID: <20141209211800.19302.44012.malone@soybean.canonical.com> Pretty sure this is a security hardening opportunity, not a vulnerability for which we would publish an advisory, and so I have classified it accordingly. ** Also affects: ossa Importance: Undecided Status: New ** Changed in: ossa Status: New => Won't Fix ** Information type changed from Public Security to Public -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1400872 Title: Show password feature should be configurable Status in OpenStack Dashboard (Horizon): Confirmed Status in OpenStack Security Advisories: Won't Fix Bug description: Horizon allows the password field to be displayed in plain text. This introduces a potential security risk. Imagine a user leaving their desktop unlocked: if the user saved their password in the browser, a malicious user could go to the Login page and display the OpenStack password. The show password feature should be made configurable for operators who want a more secure deployment of Horizon.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1400872/+subscriptions From 1372635 at bugs.launchpad.net Wed Dec 10 05:30:17 2014 From: 1372635 at bugs.launchpad.net (John Griffith) Date: Wed, 10 Dec 2014 05:30:17 -0000 Subject: [Openstack-security] [Bug 1372635] Re: MITM vulnerability with EMC VMAX driver References: <20140922201512.25143.9622.malonedeb@soybean.canonical.com> Message-ID: <20141210053017.19171.10634.malone@soybean.canonical.com> @Matt Excellent questions... -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1372635 Title: MITM vulnerability with EMC VMAX driver Status in Cinder: In Progress Status in OpenStack Security Advisories: Won't Fix Bug description: The EMC VMAX driver in Juno appears to blindly trust whatever certificate it gets back from the device without any validation (it does not specify the ca_certs parameter, etc. on WBEMConnection.__init__). This would leave it open to a MITM attack. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1372635/+subscriptions From sean at dague.net Wed Dec 10 20:08:22 2014 From: sean at dague.net (Sean Dague) Date: Wed, 10 Dec 2014 20:08:22 -0000 Subject: [Openstack-security] [Bug 1381365] Re: SSL Version and cipher selection not possible References: <20141015072233.17942.25827.malonedeb@chaenomeles.canonical.com> Message-ID: <20141210200822.31310.28149.malone@chaenomeles.canonical.com> The distro fix for this issue was a patched python that removes the bad SSL versions. I'm not convinced we should be in the business of working around that at the openstack layer. ** Changed in: nova Status: New => Won't Fix -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1381365 Title: SSL Version and cipher selection not possible Status in Cinder: New Status in OpenStack Image Registry and Delivery Service (Glance): New Status in OpenStack Identity (Keystone): Confirmed Status in OpenStack Compute (Nova): Won't Fix Status in OpenStack Security Advisories: Won't Fix Bug description: We configure keystone to always use SSL. Due to the POODLE issue, I was trying to configure keystone to disable SSLv3 completely. http://googleonlinesecurity.blogspot.fi/2014/10/this-poodle-bites-exploiting-ssl-30.html https://www.openssl.org/~bodo/ssl-poodle.pdf It seems that keystone has no support for configuring SSL versions or ciphers. If I'm not mistaken, the relevant code is in the start function in common/environment/eventlet_server.py. It calls eventlet.wrap_ssl but with no SSL version or cipher options. Since the interface is identical, I assume it uses ssl.wrap_socket. The default here seems to be PROTOCOL_SSLv23 (SSLv2 disabled), which would make this vulnerable to the POODLE issue. It should be possible to set SSL configs in the config file (with sane defaults), so that current and newly detected weak ciphers can be disabled without code changes. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1381365/+subscriptions From sean at dague.net Wed Dec 10 22:28:59 2014 From: sean at dague.net (Sean Dague) Date: Wed, 10 Dec 2014 22:28:59 -0000 Subject: [Openstack-security] [Bug 1370295] Re: Possible SQL Injection vulnerability in hyperv volumeutils2 References: <20140917005121.25418.63480.malonedeb@soybean.canonical.com> Message-ID: <20141210222859.25162.2779.malone@wampee.canonical.com> The patch is stalled. And honestly WQL is not really exposed in a substantial way here.
** Tags added: hyperv ** Changed in: nova Status: In Progress => Opinion ** Changed in: nova Importance: Undecided => Wishlist -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1370295 Title: Possible SQL Injection vulnerability in hyperv volumeutils2 Status in OpenStack Compute (Nova): Opinion Status in OpenStack Security Advisories: Won't Fix Bug description: This line: https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeutilsv2.py#L54 makes a raw SQL query using input from target_address and target_port. If an attacker is able to manipulate either of these parameters, they can exploit a SQL injection vulnerability. If neither of these parameters can be controlled by an attacker, it's probably OK to fix this in public. These should definitely at least be strengthened by using prepared statements, or even better, a secure SQL library such as sqlalchemy. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1370295/+subscriptions From gerrit2 at review.openstack.org Thu Dec 11 16:56:33 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Thu, 11 Dec 2014 16:56:33 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/141101 Log: commit cc4a31358a0979c009b75812f6776b4ba6dd99f8 Author: Brant Knudson Date: Thu Dec 11 10:40:16 2014 -0600 Fix disabling entities when enabled is ignored When LDAP is configured so that the `enabled` attribute is ignored for an entity (user, group, role, project) and a client attempts to disable the entity, it remains enabled, so a user might think that the entity was disabled when it's not.
With this change, attempting to disable an entity where `enabled` is ignored will return a 403 Forbidden error. Closes-Bug: #1241134 SecurityImpact This is for security hardening. Change-Id: I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 From 1274034 at bugs.launchpad.net Thu Dec 11 18:58:33 2014 From: 1274034 at bugs.launchpad.net (OpenStack Infra) Date: Thu, 11 Dec 2014 18:58:33 -0000 Subject: [Openstack-security] [Bug 1274034] Fix proposed to neutron (master) References: <20140129101504.12361.90017.malonedeb@gac.canonical.com> Message-ID: <20141211185833.24781.58106.malone@wampee.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/141130 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1274034 Title: Neutron firewall anti-spoofing does not prevent ARP poisoning Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Security Advisories: Invalid Status in OpenStack Security Notes: Fix Released Bug description: The neutron firewall driver 'iptables_firewall' does not prevent ARP cache poisoning. When anti-spoofing rules are handled by Nova, a list of rules is added through the libvirt network filter feature: - no-mac-spoofing - no-ip-spoofing - no-arp-spoofing - nova-no-nd-reflection - allow-dhcp-server Currently, the neutron firewall driver 'iptables_firewall' handles only MAC and IP anti-spoofing rules. This is a security vulnerability, especially on shared networks. 
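Closing this gap requires per-port rules that pin ARP traffic to the port's assigned IP and MAC, which iptables cannot express but ebtables can. A hedged sketch of the kind of rules involved — chain naming and rule layout are illustrative assumptions, not neutron's eventual fix:

```python
def arp_protect_rules(vif, ip_addr, mac_addr):
    """Per-port ebtables rules pinning ARP traffic from a VIF to the
    port's assigned IP and MAC.  Chain naming and rule layout are
    illustrative, not neutron's actual implementation."""
    chain = 'arp-%s' % vif
    return [
        # Per-port chain whose policy drops anything not explicitly accepted.
        ['ebtables', '-N', chain, '-P', 'DROP'],
        # Accept only ARP whose sender IP and MAC match the Neutron port.
        ['ebtables', '-A', chain, '-p', 'ARP',
         '--arp-ip-src', ip_addr, '--arp-mac-src', mac_addr,
         '-j', 'ACCEPT'],
        # Send all ARP coming in from this VIF through the per-port chain.
        ['ebtables', '-A', 'FORWARD', '-i', vif, '-p', 'ARP',
         '-j', chain],
    ]
```

With rules like these installed, the ettercap scenario below fails: the forged ARP replies carry the gateway's IP with VM1's MAC, so they no longer match the accept rule and are dropped at the bridge.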
Reproduce an ARP cache poisoning and man in the middle:
- Create a private network/subnet 10.0.0.0/24
- Start 2 VMs attached to that private network (VM1: IP 10.0.0.3, VM2: 10.0.0.4)
- Log on VM1 and install ettercap [1]
- Launch command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
- Log on to VM2 (with VNC/spice console) and ping google.fr => ping is ok
- Go back to VM1 and see VM2's ping to google.fr going to VM1 instead of being sent directly to the network gateway; VM1 then forwards it to the gateway. The ICMP capture looks something like this [2]
- Go back to VM2 and check the ARP table => the MAC address associated with the GW is the MAC address of VM1
[1] http://ettercap.github.io/ettercap/
[2] http://paste.openstack.org/show/62112/
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions From gerrit2 at review.openstack.org Thu Dec 11 22:15:14 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Thu, 11 Dec 2014 22:15:14 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/141101 Log: commit e9a2f8cfc5c9535db1c04d3cb19176405dfd9b84 Author: Brant Knudson Date: Thu Dec 11 10:40:16 2014 -0600 Fix disabling entities when enabled is ignored When LDAP is configured so that the `enabled` attribute was ignored for an entity (user, group, role, project) and a client attempts to disable the entity, it remains enabled, so a user might think that the entity was disabled when it's not. With this change, attempting to disable an entity where `enabled` is ignored will return a 403 Forbidden error. Closes-Bug: #1241134 SecurityImpact This is for security hardening. 
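The guard described in this commit message can be sketched as a small check at the update path — names here are illustrative, not keystone's actual code:

```python
class ForbiddenAction(Exception):
    """Maps to an HTTP 403 Forbidden response in the API layer."""

def check_enabled_update(update, enabled_ignored):
    """Refuse an update that tries to disable an entity whose `enabled`
    attribute the LDAP backend ignores; silently 'succeeding' would
    leave the entity enabled while the caller believes it is disabled.
    Names are illustrative, not keystone's actual code."""
    if enabled_ignored and update.get('enabled') is False:
        raise ForbiddenAction("cannot disable entity: the 'enabled' "
                              "attribute is ignored by this backend")
    return update
```

Failing loudly is the point: the dangerous outcome was a client that believed a user was disabled when it was not.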
Change-Id: I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 From gerrit2 at review.openstack.org Thu Dec 11 22:58:36 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Thu, 11 Dec 2014 22:58:36 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/141101 Log: commit 3ad2c057eac4e927f8bb5eac5e845d96512d1eee Author: Brant Knudson Date: Thu Dec 11 10:40:16 2014 -0600 Fix disabling entities when enabled is ignored When LDAP is configured so that the `enabled` attribute was ignored for an entity (user, group, role, project) and a client attempts to disable the entity, it remains enabled, so a user might think that the entity was disabled when it's not. With this change, attempting to disable an entity where `enabled` is ignored will return a 403 Forbidden error. Closes-Bug: #1241134 SecurityImpact This is for security hardening. Change-Id: I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 From gerrit2 at review.openstack.org Fri Dec 12 01:38:10 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 12 Dec 2014 01:38:10 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/141101 Log: commit 54e48da48e2e4ff5ad3e7acf639048e5f985d5a0 Author: Brant Knudson Date: Thu Dec 11 10:40:16 2014 -0600 Fix disabling entities when enabled is ignored When LDAP is configured so that the `enabled` attribute was ignored for an entity (user, group, role, project) and a client attempts to disable the entity, it remains enabled, so a user might think that the entity was disabled when it's not. 
With this change, attempting to disable an entity where `enabled` is ignored will return a 403 Forbidden error. Since entities are always enabled when the `enabled` attribute is ignored, there's no change to reject changes that attempt to enable the entity. Closes-Bug: #1241134 SecurityImpact This is for security hardening. Change-Id: I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 From 1376915 at bugs.launchpad.net Fri Dec 12 10:34:39 2014 From: 1376915 at bugs.launchpad.net (Eoghan Glynn) Date: Fri, 12 Dec 2014 10:34:39 -0000 Subject: [Openstack-security] [Bug 1376915] Re: Access to sensitive audit data is not properly restricted References: <20141002203501.17647.70947.malonedeb@chaenomeles.canonical.com> Message-ID: <20141212103441.30339.45938.launchpad@chaenomeles.canonical.com> ** Changed in: ceilometer Milestone: kilo-1 => kilo-2 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1376915 Title: Access to sensitive audit data is not properly restricted Status in OpenStack Telemetry (Ceilometer): In Progress Status in OpenStack Security Advisories: Won't Fix Bug description: Audit data stored in http.request and http.response meters is not being adequately protected. Admins are allowed to access audit data for all projects rather than just their own. Non-admins are allowed to access audit data for all users within their project rather than just themselves. A non-admin user should not be able to see what other users are doing, and being an admin in project A does not make you an admin in project B. The following blueprints acknowledge the lack of this support. To quote one: "as ceilometer collects more and more different types of data... some of the data collected may be 'privileged' data that only admins should have access to regardless of membership to a tenant (ie. audit data should only be visible to admins)". 
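A stopgap access check along the lines the report proposes could restrict privileged meters to their owner or a same-project admin. A sketch — the field names and meter list are assumptions for illustration, not ceilometer's actual code:

```python
def may_read_meter(meter_name, requester, resource_owner,
                   privileged_meters=('http.request', 'http.response')):
    """Stopgap authorization check: privileged audit meters are visible
    only to the user who generated them, or to an admin of that same
    project.  Field names and the privileged-meter list are illustrative."""
    if meter_name not in privileged_meters:
        return True  # non-audit meters keep the existing policy
    if requester['user_id'] == resource_owner['user_id']:
        return True  # you may always see your own audit trail
    # Admin rights do not cross project boundaries.
    return (requester.get('is_admin', False)
            and requester['project_id'] == resource_owner['project_id'])
```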
That day has come, and the implementation of these blueprints is still missing. At this point there is a security hole here (data exposure) which needs to be plugged immediately, either with the implementation of one of these blueprints (which should probably be merged together) or by a less flexible but more easily implemented stopgap measure. Given time constraints and the urgency of closing this hole, I propose the latter, though the blueprints will obviously still be necessary for a more robust and complete solution. https://blueprints.launchpad.net/ceilometer/+spec/advanced-policy-rule and https://blueprints.launchpad.net/ceilometer/+spec/admin-only-api-access and https://blueprints.launchpad.net/ceilometer/+spec/ready-ceilometer-rbac-keystone-v3 To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1376915/+subscriptions From gerrit2 at review.openstack.org Fri Dec 12 15:08:49 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 12 Dec 2014 15:08:49 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/141101 Log: commit e62de2c91b5755149146a47e84e61d3642095998 Author: Brant Knudson Date: Thu Dec 11 10:40:16 2014 -0600 Fix disabling entities when enabled is ignored When LDAP is configured so that the `enabled` attribute was ignored for an entity (user, group, role, project) and a client attempts to disable the entity, it remains enabled, so a user might think that the entity was disabled when it's not. With this change, attempting to disable an entity where `enabled` is ignored will return a 403 Forbidden error. Since entities are always enabled when the `enabled` attribute is ignored, there's no change to reject changes that attempt to enable the entity. 
Closes-Bug: #1241134 SecurityImpact This is for security hardening. Change-Id: I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 From davanum at gmail.com Mon Dec 15 15:53:35 2014 From: davanum at gmail.com (Davanum Srinivas (DIMS)) Date: Mon, 15 Dec 2014 15:53:35 -0000 Subject: [Openstack-security] [Bug 1328488] Re: oslo apiclient logs sensitive data References: <20140610105701.26336.80011.malonedeb@wampee.canonical.com> Message-ID: <20141215155337.16950.39715.launchpad@soybean.canonical.com> ** Changed in: oslo-incubator Status: In Progress => Fix Committed -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1328488 Title: oslo apiclient logs sensitive data Status in The Oslo library incubator: Fix Committed Status in OpenStack Security Advisories: Won't Fix Bug description: When trying to clean up the tempest logs in the gate, we leak passwords and keystone tokens everywhere. For instance, python-novaclient logs the auth token. What's more problematic though is that apiclient does the following:

    def _http_log_req(self, method, url, kwargs):
        if not self.debug:
            return

        string_parts = [
            "curl -i",
            "-X '%s'" % method,
            "'%s'" % url,
        ]

        for element in kwargs['headers']:
            header = "-H '%s: %s'" % (element, kwargs['headers'][element])
            string_parts.append(header)

        _logger.debug("REQ: %s" % " ".join(string_parts))
        if 'data' in kwargs:
            _logger.debug("REQ BODY: %s\n" % (kwargs['data']))

The argument that it's at DEBUG level doesn't hold, because from the Atlanta operators summit it was clear that *all* operators are running their servers at DEBUG, because OpenStack is impossible to actually troubleshoot at any other logging level. And if you run neutron at debug level, then all your nova credentials are in your logs. This is coupled with the fact that a large number of users are streaming all their logs directly into logstash. 
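One way to stop leaking credentials at DEBUG level is to mask sensitive headers before the curl-style log line is built. A minimal sketch — the header list is an illustrative assumption, not the actual oslo fix:

```python
SENSITIVE_HEADERS = ('X-Auth-Token', 'X-Subject-Token', 'Authorization')

def sanitize_headers(headers):
    """Return a copy of the request headers with credential-bearing
    values masked, suitable for feeding to a debug logger like
    _http_log_req above.  The header list is an illustrative assumption."""
    return {name: ('***' if name in SENSITIVE_HEADERS else value)
            for name, value in headers.items()}
```

Logging the sanitized copy keeps the curl reproduction line useful for troubleshooting while removing the token itself from anything that ends up in logstash.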
Which means they've now got a potentially public endpoint that lets them search for credentials. We need to stop doing that in a blanket way across OpenStack. To manage notifications about this bug go to: https://bugs.launchpad.net/oslo-incubator/+bug/1328488/+subscriptions From 1362343 at bugs.launchpad.net Mon Dec 15 21:57:12 2014 From: 1362343 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 15 Dec 2014 21:57:12 -0000 Subject: [Openstack-security] [Bug 1362343] Change abandoned on keystone (master) References: <20140827212906.28020.52551.malonedeb@wampee.canonical.com> Message-ID: <20141215215712.31145.17620.malone@chaenomeles.canonical.com> Change abandoned by Morgan Fainberg (morgan.fainberg at gmail.com) on branch: master Review: https://review.openstack.org/117380 Reason: This change is being abandoned because it has a negative score and has not seen an update in > 60 days. Feel free to re-instate this patch (as the author) by using the "restore" button or any member of the core team can re-instate the patch. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1362343 Title: weak digest algorithm for PKI Status in OpenStack Identity (Keystone): In Progress Status in Python client library for Keystone: Fix Released Bug description: The digest algorithm for PKI tokens is the openssl default of sha1. This is a weak algorithm and some security standards require a stronger algorithm such as sha256. Keystone should make the token digest hash algorithm configurable so that deployments can use a stronger algorithm. Also, the default could be stronger. 
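Making the digest configurable, as the PKI report above asks, is straightforward with hashlib. A sketch — the option plumbing (where `algorithm` comes from) is assumed, not keystone's actual configuration name:

```python
import hashlib

def token_digest(token_data, algorithm='sha256'):
    """Hash token data with a configurable algorithm instead of the
    openssl sha1 default.  Where `algorithm` is configured is an
    assumption; deployments needing e.g. sha256 would set it there."""
    h = hashlib.new(algorithm)  # raises ValueError for unknown algorithms
    h.update(token_data)
    return h.hexdigest()
```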
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1362343/+subscriptions From thingee at gmail.com Tue Dec 16 05:37:16 2014 From: thingee at gmail.com (Mike Perez) Date: Tue, 16 Dec 2014 05:37:16 -0000 Subject: [Openstack-security] [Bug 1372635] Re: MITM vulnerability with EMC VMAX driver References: <20140922201512.25143.9622.malonedeb@soybean.canonical.com> Message-ID: <20141216053718.30804.16875.launchpad@chaenomeles.canonical.com> ** Changed in: cinder Milestone: kilo-1 => kilo-2 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1372635 Title: MITM vulnerability with EMC VMAX driver Status in Cinder: In Progress Status in OpenStack Security Advisories: Won't Fix Bug description: The EMC VMAX driver in Juno appears to blindly trust whatever certificate it gets back from the device without any validation (it does not specify the ca_certs parameter, etc. on WBEMConnection.__init__). This would leave it open to a MITM attack. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1372635/+subscriptions From thierry.carrez+lp at gmail.com Tue Dec 16 10:44:02 2014 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Tue, 16 Dec 2014 10:44:02 -0000 Subject: [Openstack-security] [Bug 1316271] Re: Network Security: VM hosts can SSH to compute node References: <20140505190222.27207.36590.malonedeb@gac.canonical.com> Message-ID: <20141216104405.21606.5622.launchpad@gac.canonical.com> ** Changed in: nova Milestone: kilo-1 => kilo-2 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. 
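The missing validation in the VMAX report above amounts to requiring and verifying the server certificate on the client side. A generic sketch with Python's ssl module, independent of pywbem's actual API surface:

```python
import ssl

def verified_context(ca_certs_path=None):
    """Build a client-side TLS context that requires and verifies the
    server certificate, instead of trusting whatever comes back.  The
    CA-bundle path handling is illustrative."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = True             # reject certs for the wrong host
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject unverifiable certs
    if ca_certs_path:
        ctx.load_verify_locations(cafile=ca_certs_path)
    return ctx
```

Any client that instead leaves verification off (the equivalent of omitting ca_certs) will happily complete a handshake with a man-in-the-middle.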
https://bugs.launchpad.net/bugs/1316271 Title: Network Security: VM hosts can SSH to compute node Status in OpenStack Compute (Nova): In Progress Status in OpenStack Security Advisories: Won't Fix Status in OpenStack Security Notes: Fix Released Bug description: Hi guys, We're still using nova-network and we'll be using it for a while and we noticed that the VM guests can contact the compute nodes on all ports ... The one we're the most preoccupied with is SSH. We've written the following patch in order to isolate the VM guests from the VM hosts.

--- linux_net.py.orig 2014-05-05 17:25:10.171746968 +0000
+++ linux_net.py 2014-05-05 18:42:54.569209220 +0000
@@ -805,6 +805,24 @@
 @utils.synchronized('lock_gateway', external=True)
+def isolate_compute_from_guest(network_ref):
+    if not network_ref:
+        return
+
+    iptables_manager.ipv4['filter'].add_rule('INPUT',
+        '-p tcp -d %s --dport 8775 '
+        '-j ACCEPT' % network_ref['dhcp_server'])
+    iptables_manager.ipv4['filter'].add_rule('FORWARD',
+        '-p tcp -d %s --dport 8775 '
+        '-j ACCEPT' % network_ref['dhcp_server'])
+    iptables_manager.ipv4['filter'].add_rule('INPUT',
+        '-d %s '
+        '-j DROP' % network_ref['dhcp_server'])
+    iptables_manager.ipv4['filter'].add_rule('FORWARD',
+        '-d %s '
+        '-j DROP' % network_ref['dhcp_server'])
+    iptables_manager.apply()
+
 def initialize_gateway_device(dev, network_ref):
     if not network_ref:
         return
@@ -1046,6 +1064,7 @@
     try:
         _execute('kill', '-HUP', pid, run_as_root=True)
         _add_dnsmasq_accept_rules(dev)
+        isolate_compute_from_guest(network_ref)
         return
     except Exception as exc:  # pylint: disable=W0703
         LOG.error(_('Hupping dnsmasq threw %s'), exc)
@@ -1098,6 +1117,7 @@
     _add_dnsmasq_accept_rules(dev)
+    isolate_compute_from_guest(network_ref)

 @utils.synchronized('radvd_start')
 def update_ra(context, dev, network_ref):

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1316271/+subscriptions From gerrit2 at 
review.openstack.org (gerrit2 at review.openstack.org) Date: Tue, 16 Dec 2014 15:51:27 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change I64859ad01120782fb17308aac3abb125597c3ea2 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/115484 Log: commit f0bfa976af53c01c9e2517956f3c2bd1f2a5d6a8 Author: Solly Ross Date: Tue Aug 19 19:21:52 2014 -0400 Add VeNCrypt (TLS/x509) Security Proxy Driver This adds support for using x509/TLS security between the compute node and websocket proxy when using websockify to proxy VNC traffic. In order to use this with x509, an operator would have to set up client keys and certificates, as well as CA certificates, and configure libvirt to pass the appropriate options to QEmu (this is configured globally for libvirt, not by Nova). This process is documented on the libvirt website. Then, the operator would enable this driver and set the following options in /etc/nova/nova.conf:

    [console_proxy_tls]
    client_key = /path/to/client/keyfile
    client_cert = /path/to/client/cert.pem
    ca_certs = /path/to/ca/cert.pem

SecurityImpact DocImpact Implements bp: websocket-proxy-to-host-security Change-Id: I64859ad01120782fb17308aac3abb125597c3ea2 From thierry.carrez+lp at gmail.com Wed Dec 17 09:22:23 2014 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Wed, 17 Dec 2014 09:22:23 -0000 Subject: [Openstack-security] [Bug 1394988] Re: Hadoop with auto security group not working References: <20141121132920.28673.42565.malonedeb@wampee.canonical.com> Message-ID: <20141217092227.16950.61699.launchpad@soybean.canonical.com> ** Changed in: sahara Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1394988 Title: Hadoop with auto security group not working Status in OpenStack Data Processing (Sahara, ex. 
Savanna): Fix Released Bug description: ENV: OpenStack with Neutron Steps to reproduce: 1. Create cluster with auto security groups and two or more nodemanagers. 2. Launch pig job with 1Gb input data Result: After ~1 hour job in KILLED state. Reproduced with the following plugins: Vanilla 2 HDP 2 CDH To manage notifications about this bug go to: https://bugs.launchpad.net/sahara/+bug/1394988/+subscriptions From thierry.carrez+lp at gmail.com Wed Dec 17 09:23:37 2014 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Wed, 17 Dec 2014 09:23:37 -0000 Subject: [Openstack-security] [Bug 1393397] Re: Create Spark cluster with auto security group failed References: <20141117121325.31950.16947.malonedeb@soybean.canonical.com> Message-ID: <20141217092340.30770.91292.launchpad@chaenomeles.canonical.com> ** Changed in: sahara Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1393397 Title: Create Spark cluster with auto security group failed Status in OpenStack Data Processing (Sahara, ex. Savanna): Fix Released Bug description: Steps to reproduce: Create cluster with auto security groups. Result: Cluster in 'Error' state To manage notifications about this bug go to: https://bugs.launchpad.net/sahara/+bug/1393397/+subscriptions From thierry.carrez+lp at gmail.com Wed Dec 17 09:23:21 2014 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Wed, 17 Dec 2014 09:23:21 -0000 Subject: [Openstack-security] [Bug 1391520] Re: Create CDH cluster with auto security group failed References: <20141111134553.19455.35550.malonedeb@gac.canonical.com> Message-ID: <20141217092324.31048.15777.launchpad@chaenomeles.canonical.com> ** Changed in: sahara Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1391520 Title: Create CDH cluster with auto security group failed Status in OpenStack Data Processing (Sahara, ex. Savanna): Fix Released Status in Sahara juno series: New Bug description: Steps to reproduce: Create cluster with auto security groups. Result: Cluster in 'Configuring' state To manage notifications about this bug go to: https://bugs.launchpad.net/sahara/+bug/1391520/+subscriptions From thierry.carrez+lp at gmail.com Wed Dec 17 09:23:17 2014 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Wed, 17 Dec 2014 09:23:17 -0000 Subject: [Openstack-security] [Bug 1391518] Re: [HDP] Create HDP cluster with auto security group failed References: <20141111134355.14164.2244.malonedeb@wampee.canonical.com> Message-ID: <20141217092320.24634.67982.launchpad@wampee.canonical.com> ** Changed in: sahara Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1391518 Title: [HDP] Create HDP cluster with auto security group failed Status in OpenStack Data Processing (Sahara, ex. Savanna): Fix Released Bug description: Steps to reproduce: Create cluster with auto security groups. Result: Cluster in 'Configuring' state To manage notifications about this bug go to: https://bugs.launchpad.net/sahara/+bug/1391518/+subscriptions From gerrit2 at review.openstack.org Wed Dec 17 14:01:13 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 17 Dec 2014 14:01:13 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change If3f88d8db4a726219573d0f1b65908408e3aa6a9 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. 
https://review.openstack.org/139672 Log: commit 63d392e1e90115c7b07cd8ae76259f2ac151041e Author: Matthew Gilliard Date: Fri Dec 5 16:14:52 2014 +0000 WIP: Adds ssl_overrides for client configurations We want to have a consistent way to apply ssl config to the various http(s) clients that nova creates. Following an ML discussion[1], this is a POC for the approach which has each client using the global options in CONF.ssl.* with optional local overrides. These are DictOpts in each client's config section, for example:

    [ssl]
    ca_file = /etc/ssl/ca_file

    [glance]
    ssl_overrides = {ca_file:/etc/ssl/glance_ca_file}

The keys which can be overridden are: ca_file, key_file, cert_file. [1] http://lists.openstack.org/pipermail/openstack-dev/2014-December/052175.html SecurityImpact: SSL config of Nova's Glance client DocImpact: New configuration option as described Change-Id: If3f88d8db4a726219573d0f1b65908408e3aa6a9 From gerrit2 at review.openstack.org Wed Dec 17 14:01:42 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 17 Dec 2014 14:01:42 +0000 Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change If3f88d8db4a726219573d0f1b65908408e3aa6a9 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/139672 Log: commit 4bde0cbdf961bbbab8a056a9cf4c57ed8733f551 Author: Matthew Gilliard Date: Fri Dec 5 16:14:52 2014 +0000 Adds ssl_overrides for client configurations We want to have a consistent way to apply ssl config to the various http(s) clients that nova creates. Following an ML discussion[1], this is a POC for the approach which has each client using the global options in CONF.ssl.* with optional local overrides. These are DictOpts in each client's config section, for example:

    [ssl]
    ca_file = /etc/ssl/ca_file

    [glance]
    ssl_overrides = {ca_file:/etc/ssl/glance_ca_file}

The keys which can be overridden are: ca_file, key_file, cert_file. 
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-December/052175.html SecurityImpact: SSL config of Nova's Glance client DocImpact: New configuration option as described Change-Id: If3f88d8db4a726219573d0f1b65908408e3aa6a9 From gerrit2 at review.openstack.org Wed Dec 17 19:15:56 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 17 Dec 2014 19:15:56 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/142554 Log: commit 6b6be744214e81f0aab9b5c6d5040ec779aea036 Author: Brant Knudson Date: Thu Dec 11 10:40:16 2014 -0600 Fix disabling entities when enabled is ignored When LDAP is configured so that the `enabled` attribute was ignored for an entity (user, group, role, project) and a client attempts to disable the entity, it remains enabled, so a user might think that the entity was disabled when it's not. With this change, attempting to disable an entity where `enabled` is ignored will return a 403 Forbidden error. Since entities are always enabled when the `enabled` attribute is ignored, there's no change to reject changes that attempt to enable the entity. Closes-Bug: #1241134 SecurityImpact This is for security hardening. (cherry picked from commit e62de2c91b5755149146a47e84e61d3642095998) Conflicts: keystone/tests/test_backend_ldap.py Backport note: Conflict was because some tests were moved to unit tests in Kilo. 
Change-Id: I8cb3326952d6e379a457c19d7f8f5f9ee4b29eb0 From nkinder at redhat.com Thu Dec 18 06:46:27 2014 From: nkinder at redhat.com (Nathan Kinder) Date: Thu, 18 Dec 2014 06:46:27 -0000 Subject: [Openstack-security] [Bug 1341954] Re: suds client subject to cache poisoning by local attacker References: <20140715043528.2209.47100.malonedeb@gac.canonical.com> Message-ID: <20141218064628.31013.39052.malone@chaenomeles.canonical.com> This has been published as OSSN-0038 to the openstack and openstack-dev mailing lists as well as the wiki: https://wiki.openstack.org/wiki/OSSN/OSSN-0038 ** Changed in: ossn Status: In Progress => Fix Released -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1341954 Title: suds client subject to cache poisoning by local attacker Status in Cinder: Fix Released Status in Cinder havana series: Fix Released Status in Cinder icehouse series: Fix Released Status in Gantt: New Status in OpenStack Compute (Nova): Fix Released Status in Oslo VMware library for OpenStack projects: Fix Released Status in OpenStack Security Advisories: Won't Fix Status in OpenStack Security Notes: Fix Released Bug description: The suds project appears to be largely unmaintained upstream. The default cache implementation stores pickled objects to a predictable path in /tmp. This can be used by a local attacker to redirect SOAP requests via symlinks or run a privilege escalation / code execution attack via a pickle exploit.

    cinder/requirements.txt:suds>=0.4
    gantt/requirements.txt:suds>=0.4
    nova/requirements.txt:suds>=0.4
    oslo.vmware/requirements.txt:suds>=0.4

The details are available here - https://bugzilla.redhat.com/show_bug.cgi?id=978696 (CVE-2013-2217) Although this is an unlikely attack vector, steps should be taken to prevent this behaviour. 
Potential ways to fix this are by explicitly setting the cache location to a directory created via tempfile.mkdtemp(), disabling the cache (client.set_options(cache=None)), or using a custom cache implementation that doesn't load / store pickled objects from an insecure location. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1341954/+subscriptions From gerrit2 at review.openstack.org Thu Dec 18 07:55:54 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Thu, 18 Dec 2014 07:55:54 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I03b9c5c64f4bd8bca78dfc83199ef17d9a7ea5b7 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/130824 Log: commit 20b7e00b0e03e2d4fb852f8aac54b3f607737185 Author: abhishekkekane Date: Tue Oct 21 04:10:57 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persistent connections, so even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg:

    $ time nc localhost 8776
    real 1m0.063s

Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). 
SecurityImpact Closes-Bug: #1361360 Change-Id: I03b9c5c64f4bd8bca78dfc83199ef17d9a7ea5b7 From thierry.carrez+lp at gmail.com Thu Dec 18 10:34:23 2014 From: thierry.carrez+lp at gmail.com (Thierry Carrez) Date: Thu, 18 Dec 2014 10:34:23 -0000 Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141218103430.30804.35491.launchpad@chaenomeles.canonical.com> ** Changed in: cinder Status: Fix Committed => Fix Released ** Changed in: cinder Milestone: None => kilo-1 -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Released Status in Cinder icehouse series: Fix Committed Status in Cinder juno series: Fix Released Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library by simply not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova api service, the eventlet library creates a green thread from the pool and starts processing the request. 
Even after the response is sent to the caller, the green thread is not returned back to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node, determine the wsgi pool size configured for the given service, then send that many requests to the service and, after receiving the response, wait there infinitely doing nothing, disrupting services for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack. The following program illustrates choking of nova-api services (but this problem is omnipresent in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in order to reproduce this problem in nova/wsgi.py. After you run the below program, you should try to invoke the API.
============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    #Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        #during this sleep time, check if the client socket connection is released or not on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================
Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. 
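The keepalive and timeout handling the patch above describes reduces to a tiny convention plus the server call. A sketch — the 0-means-no-timeout mapping follows the commit message; the eventlet call is shown only as a comment because it needs a live listening socket:

```python
def socket_timeout_opt(client_socket_timeout):
    """Map the configured client_socket_timeout to what an eventlet-style
    wsgi server expects: 0 means 'never time out', expressed as None;
    any positive value is a timeout in seconds."""
    return client_socket_timeout or None

# Hypothetical wiring, assuming a listening socket `sock` and a WSGI app `app`:
# eventlet.wsgi.server(sock, app,
#                      keepalive=False,  # close the client socket after each response
#                      socket_timeout=socket_timeout_opt(900))
```

With keepalive off and a socket timeout set, a client that never closes its connection can no longer pin a green thread indefinitely.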
In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server.

Additional information: By default eventlet sends "Connection: keep-alive" when a response is sent to the client if keepalive is set to True, but it has no capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5.

Note: Once we have disabled keepalive in all the OpenStack API services using the wsgi library, it may impact existing applications built on the assumption that OpenStack API services use persistent connections. They might need to modify their applications if reconnection logic is not in place, and they may also see slower performance, since an HTTP connection has to be re-established for every request.

To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From thierry.carrez+lp at gmail.com Thu Dec 18 13:51:45 2014
From: thierry.carrez+lp at gmail.com (Thierry Carrez)
Date: Thu, 18 Dec 2014 13:51:45 -0000
Subject: [Openstack-security] [Bug 1384626] Re: SSL certification verification failed when Heat calls Glanceclient with ca cert
References: <20141023090711.3131.60470.malonedeb@wampee.canonical.com>
Message-ID: <20141218135147.30626.98302.launchpad@chaenomeles.canonical.com>

** Changed in: heat
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1384626

Title: SSL certification verification failed when Heat calls Glanceclient with ca cert

Status in Orchestration API (Heat): Fix Released
Status in heat juno series: Fix Released

Bug description: The Glance server is configured for HTTPS.
Heat is configured with heat.conf:

[clients_glance]
ca_file=
insecure=

When trying to create a stack, Heat raises an exception while loading the image data:

[Errno 1] _ssl.c:492: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

The root cause is that ca_file, as used below, is the wrong argument for initializing the glance client; it should be cacert, which is the argument glanceclient supports.

class GlanceClientPlugin(client_plugin.ClientPlugin):
    exceptions_module = exc

    def _create(self):
        con = self.context
        endpoint_type = self._get_client_option('glance', 'endpoint_type')
        endpoint = self.url_for(service_type='image', endpoint_type=endpoint_type)
        args = {
            'auth_url': con.auth_url,
            'service_type': 'image',
            'project_id': con.tenant,
            'token': self.auth_token,
            'endpoint_type': endpoint_type,
            'ca_file': self._get_client_option('glance', 'ca_file'),
            'cert_file': self._get_client_option('glance', 'cert_file'),
            'key_file': self._get_client_option('glance', 'key_file'),
            'insecure': self._get_client_option('glance', 'insecure')

To manage notifications about this bug go to: https://bugs.launchpad.net/heat/+bug/1384626/+subscriptions

From thierry.carrez+lp at gmail.com Thu Dec 18 14:27:08 2014
From: thierry.carrez+lp at gmail.com (Thierry Carrez)
Date: Thu, 18 Dec 2014 14:27:08 -0000
Subject: [Openstack-security] [Bug 1382562] Re: security groups remote_group fails with CIDR in address pairs
References: <20141017134212.4935.31773.malonedeb@wampee.canonical.com>
Message-ID: <20141218142713.30560.40628.launchpad@chaenomeles.canonical.com>

** Changed in: neutron
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1382562

Title: security groups remote_group fails with CIDR in address pairs

Status in OpenStack Neutron (virtual network service): Fix Released
Status in neutron juno series: Fix Released
Status in OpenStack Security Advisories: Won't Fix

Bug description: Add a CIDR to the allowed address pairs of a host. RPC calls from the agents will now run into this issue when retrieving the security group members' IPs. I haven't confirmed this (I came across it while working on other code), but I think it may stop all members of the security groups referencing that group from getting their rules over the RPC channel.

  File "neutron/api/rpc/handlers/securitygroups_rpc.py", line 75, in security_group_info_for_devices
    return self.plugin.security_group_info_for_ports(context, ports)
  File "neutron/db/securitygroups_rpc_base.py", line 202, in security_group_info_for_ports
    return self._get_security_group_member_ips(context, sg_info)
  File "neutron/db/securitygroups_rpc_base.py", line 209, in _get_security_group_member_ips
    ethertype = 'IPv%d' % netaddr.IPAddress(ip).version
  File "/home/administrator/code/neutron/.tox/py27/local/lib/python2.7/site-packages/netaddr/ip/__init__.py", line 281, in __init__
    % self.__class__.__name__)
ValueError: IPAddress() does not support netmasks or subnet prefixes! See documentation for details.
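The failing line treats every security group member entry as a bare address. The stdlib ipaddress module draws the same address-versus-network distinction as netaddr, so the failure mode can be sketched without a neutron deployment (an illustrative analogy only; neutron itself uses netaddr):

```python
import ipaddress

def ethertype_for(ip):
    # Mirrors neutron's failing line: 'IPv%d' % netaddr.IPAddress(ip).version
    return 'IPv%d' % ipaddress.ip_address(ip).version

print(ethertype_for('10.0.0.1'))   # a plain member address works

try:
    # A CIDR from allowed-address-pairs reaching the same code path blows up.
    ethertype_for('10.0.0.0/24')
except ValueError as exc:
    print('rejected:', exc)

# The prefix form needs the interface/network types instead:
print(ipaddress.ip_interface('10.0.0.0/24').version)
```

The fix direction follows from the same distinction: an entry carrying a prefix has to be routed through a network-aware type rather than the plain address constructor.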
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1382562/+subscriptions

From thierry.carrez+lp at gmail.com Thu Dec 18 16:20:02 2014
From: thierry.carrez+lp at gmail.com (Thierry Carrez)
Date: Thu, 18 Dec 2014 16:20:02 -0000
Subject: [Openstack-security] [Bug 1380642] Re: Horizon should not log token
References: <20141013141256.17942.18062.malonedeb@chaenomeles.canonical.com>
Message-ID: <20141218162005.22025.97083.launchpad@gac.canonical.com>

** Changed in: horizon
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1380642

Title: Horizon should not log token

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: This is the Horizon version of bug 1327019. Various modules in openstack_dashboard/api log the token. In other modules the token value is no longer logged and is output as *REDACTED* or some similar string. In Horizon's case these log lines are simply removed to fix the issue, as this logging seems unnecessary in most cases. I don't think this needs to be private, based on the discussion in bug 1327019.
def novaclient(request):
    insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
    cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
    LOG.debug('novaclient connection created using token "%s" and url "%s"' %
              (request.user.token.id, base.url_for(request, 'compute')))
    c = nova_client.Client(request.user.username,
                           request.user.token.id,
                           project_id=request.user.tenant_id,
                           auth_url=base.url_for(request, 'compute'),
                           insecure=insecure,
                           cacert=cacert,
                           http_log_debug=settings.DEBUG)
    c.client.auth_token = request.user.token.id
    c.client.management_url = base.url_for(request, 'compute')
    return c

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1380642/+subscriptions

From thierry.carrez+lp at gmail.com Thu Dec 18 19:58:41 2014
From: thierry.carrez+lp at gmail.com (Thierry Carrez)
Date: Thu, 18 Dec 2014 19:58:41 -0000
Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests
References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com>
Message-ID: <20141218195846.21889.49430.launchpad@gac.canonical.com>

** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => kilo-1

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1361360

Title: Eventlet green threads not released back to the pool leading to choking of new requests

Status in Cinder: Fix Released
Status in Cinder icehouse series: Fix Committed
Status in Cinder juno series: Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance): In Progress
Status in Glance icehouse series: New
Status in OpenStack Identity (Keystone): In Progress
Status in Keystone icehouse series: Confirmed
Status in Keystone juno series: Fix Committed
Status in OpenStack Neutron (virtual network service): In Progress
Status in neutron icehouse series: New
Status in OpenStack Compute (Nova): Fix Released
Status in OpenStack Compute (nova) icehouse series: New
Status in OpenStack Security Advisories: Won't Fix

Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using the wsgi+eventlet library simply by not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova API service, the eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node, determine the wsgi pool size configured for the given service, then send that many requests to the service and, after receiving the responses, wait there indefinitely doing nothing, disrupting services for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack.
The following program illustrates choking of nova-api services (but this problem is present in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API.

============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep, check whether the client socket connection is released on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================

Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server.

Additional information: By default eventlet sends "Connection: keep-alive" when a response is sent to the client if keepalive is set to True, but it has no capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5.

Note: Once we have disabled keepalive in all the OpenStack API services using the wsgi library, it may impact existing applications built on the assumption that OpenStack API services use persistent connections.
They might need to modify their applications if reconnection logic is not in place, and they may also see slower performance, since an HTTP connection has to be re-established for every request.

To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From gerrit2 at review.openstack.org Fri Dec 19 07:47:01 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Fri, 19 Dec 2014 07:47:01 +0000
Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change If492810a2f10fa5954f8c8bb708b14be0b77fb90
Message-ID:

Hi, I'd like you to take a look at this patch for potential SecurityImpact.

https://review.openstack.org/140304

Log:
commit 2ee7551fd3899ec13067ea8828b6c58e1c8badfd
Author: Stuart McLaren
Date: Fri Sep 5 12:48:04 2014 +0000

    Add client_socket_timeout option

    Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg:

    $ time nc localhost 8776
    real 1m0.063s

    Setting 'client_socket_timeout = 0' means do not timeout.

    DocImpact: Added client_socket_timeout option (default=0).
    SecurityImpact

    Conflicts: cinder/wsgi.py etc/cinder/cinder.conf.sample

    Note: This patch is not a 1:1 cherry-pick; I have changed the default value of client_socket_timeout to 0 and renamed a variable in test_wsgi.py.

    Change-Id: If492810a2f10fa5954f8c8bb708b14be0b77fb90
    Closes-bug: #1361360
    Closes-bug: #1371022
    (cherry picked from commit 08bfa77aeccb8ca589e3fb5cf9771879818f59de)

From gerrit2 at review.openstack.org Fri Dec 19 07:49:19 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Fri, 19 Dec 2014 07:49:19 +0000
Subject: [Openstack-security] [openstack/cinder] SecurityImpact review request change If492810a2f10fa5954f8c8bb708b14be0b77fb90
Message-ID:

Hi, I'd like you to take a look at this patch for potential SecurityImpact.
https://review.openstack.org/140304

Log:
commit 336bd26eca6a80802e388de05d1f8ddd94f4c283
Author: Stuart McLaren
Date: Fri Sep 5 12:48:04 2014 +0000

    Add client_socket_timeout option

    Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg:

    $ time nc localhost 8776
    real 1m0.063s

    Setting 'client_socket_timeout = 0' means do not timeout.

    DocImpact: Added client_socket_timeout option (default=0).
    SecurityImpact

    Conflicts: cinder/wsgi.py etc/cinder/cinder.conf.sample

    Note: This patch is not a 1:1 cherry-pick; I have changed the default value of client_socket_timeout to 0 and renamed a variable in test_wsgi.py.

    Change-Id: If492810a2f10fa5954f8c8bb708b14be0b77fb90
    Closes-bug: #1361360
    Closes-bug: #1371022
    (cherry picked from commit 08bfa77aeccb8ca589e3fb5cf9771879818f59de)

From 1400443 at bugs.launchpad.net Tue Dec 9 16:48:01 2014
From: 1400443 at bugs.launchpad.net (Dolph Mathews)
Date: Tue, 09 Dec 2014 16:48:01 -0000
Subject: [Openstack-security] [Bug 1400443] Re: Keystone should support pre-hashed passwords
References: <20141208182330.19434.7359.malonedeb@soybean.canonical.com>
Message-ID: <20141209164803.18558.82337.launchpad@soybean.canonical.com>

** Changed in: keystone
   Importance: Undecided => Wishlist

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1400443

Title: Keystone should support pre-hashed passwords

Status in OpenStack Identity (Keystone): Incomplete
Status in OpenStack Security Advisories: Won't Fix

Bug description: Passwords should be allowed to be pre-hashed upon user creation for better security. A user may want to store passwords in a script file, and it is much safer for these to be hashed beforehand so that the password is not in plaintext.
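The pre-hashing workflow the reporter describes can be sketched with the standard library alone: the script stores a salted digest, not the plaintext. This is purely illustrative; the format and parameters below are assumptions for the example, not the hash scheme keystone's identity backend actually accepts.

```python
import base64
import hashlib
import os

def prehash_password(password, iterations=100000):
    # Derive a salted PBKDF2-SHA512 digest so a script can store this
    # string instead of the plaintext password (illustrative format only).
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac('sha512', password.encode(), salt, iterations)
    return 'pbkdf2_sha512$%d$%s$%s' % (
        iterations,
        base64.b64encode(salt).decode(),
        base64.b64encode(dk).decode(),
    )

def verify_password(password, stored):
    # Recompute the digest with the recorded salt and iteration count.
    _, iters, salt_b64, dk_b64 = stored.split('$')
    dk = hashlib.pbkdf2_hmac('sha512', password.encode(),
                             base64.b64decode(salt_b64), int(iters))
    return base64.b64encode(dk).decode() == dk_b64
```

The point of the feature request is that only the verify side needs the server: creation could accept the already-derived string.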
This was implemented in keystone at one point: https://git.openstack.org/cgit/openstack/keystone/commit/?id=e492bbc68ef41b276a0a18c6dbeda242d46b66f4

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1400443/+subscriptions

From clu at us.ibm.com Tue Dec 9 21:02:50 2014
From: clu at us.ibm.com (Cindy Lu)
Date: Tue, 09 Dec 2014 21:02:50 -0000
Subject: [Openstack-security] [Bug 1400872] Re: Show password feature should be configurable
References: <20141209205525.24719.53334.malonedeb@wampee.canonical.com>
Message-ID: <20141209210252.18221.63986.launchpad@gac.canonical.com>

** Changed in: horizon
   Assignee: (unassigned) => Cindy Lu (clu-m)

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1400872

Title: Show password feature should be configurable

Status in OpenStack Dashboard (Horizon): Confirmed

Bug description: Horizon allows the password field to be displayed in plain text, which introduces a potential security risk. Imagine a user leaving their desktop unlocked: if the user saved their password in the browser, a malicious user could go to the login page and display the OpenStack password. The show password feature should be made configurable for operators who want a more secure deployment of Horizon.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1400872/+subscriptions

From mark.brown at canonical.com Wed Dec 10 15:13:11 2014
From: mark.brown at canonical.com (Mark Brown)
Date: Wed, 10 Dec 2014 15:13:11 -0000
Subject: [Openstack-security] [Bug 1372635] Re: MITM vulnerability with EMC VMAX driver
References: <20140922201512.25143.9622.malonedeb@soybean.canonical.com>
Message-ID: <20141210151311.16632.74209.malone@gac.canonical.com>

Hmm.
I may be delinquent in noting this here, that's on me; but an attempt to address this is being made in https://bugs.launchpad.net/ubuntu/+source/pywbem/+bug/1385469

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1372635

Title: MITM vulnerability with EMC VMAX driver

Status in Cinder: In Progress
Status in OpenStack Security Advisories: Won't Fix

Bug description: The EMC VMAX driver in Juno appears to blindly trust whatever certificate it gets back from the device, without any validation (it does not specify the ca_certs parameter, etc., on WBEMConnection.__init__). This would leave it open to a MITM attack.

To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1372635/+subscriptions

From mark.brown at canonical.com Wed Dec 10 15:16:51 2014
From: mark.brown at canonical.com (Mark Brown)
Date: Wed, 10 Dec 2014 15:16:51 -0000
Subject: [Openstack-security] [Bug 1372635] Re: MITM vulnerability with EMC VMAX driver
References: <20140922201512.25143.9622.malonedeb@soybean.canonical.com>
Message-ID: <20141210151651.24884.62513.malone@wampee.canonical.com>

Closed that too early. I think the reason there is no bug against pywbem is that this is already fixed in their dev branch, but not yet released as a stable version.

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1372635

Title: MITM vulnerability with EMC VMAX driver

Status in Cinder: In Progress
Status in OpenStack Security Advisories: Won't Fix

Bug description: The EMC VMAX driver in Juno appears to blindly trust whatever certificate it gets back from the device, without any validation (it does not specify the ca_certs parameter, etc., on WBEMConnection.__init__). This would leave it open to a MITM attack.
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1372635/+subscriptions

From 1400872 at bugs.launchpad.net Wed Dec 10 23:03:45 2014
From: 1400872 at bugs.launchpad.net (OpenStack Infra)
Date: Wed, 10 Dec 2014 23:03:45 -0000
Subject: [Openstack-security] [Bug 1400872] Re: Show password feature should be configurable
References: <20141209205525.24719.53334.malonedeb@wampee.canonical.com>
Message-ID: <20141210230345.24997.97275.malone@wampee.canonical.com>

Fix proposed to branch: master
Review: https://review.openstack.org/140862

** Changed in: horizon
   Status: Confirmed => In Progress

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1400872

Title: Show password feature should be configurable

Status in OpenStack Dashboard (Horizon): In Progress
Status in OpenStack Security Advisories: Won't Fix

Bug description: Horizon allows the password field to be displayed in plain text, which introduces a potential security risk. Imagine a user leaving their desktop unlocked: if the user saved their password in the browser, a malicious user could go to the login page and display the OpenStack password. The show password feature should be made configurable for operators who want a more secure deployment of Horizon.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1400872/+subscriptions

From thierry.carrez+lp at gmail.com Mon Dec 22 11:08:37 2014
From: thierry.carrez+lp at gmail.com (Thierry Carrez)
Date: Mon, 22 Dec 2014 11:08:37 -0000
Subject: [Openstack-security] [Bug 1328488] Re: oslo apiclient logs sensitive data
References: <20140610105701.26336.80011.malonedeb@wampee.canonical.com>
Message-ID: <20141222110841.30339.991.launchpad@chaenomeles.canonical.com>

** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1328488

Title: oslo apiclient logs sensitive data

Status in The Oslo library incubator: Fix Released
Status in OpenStack Security Advisories: Won't Fix

Bug description: When trying to clean up the tempest logs in the gate, we leak passwords and keystone tokens everywhere. For instance, python-novaclient logs the auth token. What's more problematic, though, is that apiclient does the following:

def _http_log_req(self, method, url, kwargs):
    if not self.debug:
        return
    string_parts = [
        "curl -i",
        "-X '%s'" % method,
        "'%s'" % url,
    ]
    for element in kwargs['headers']:
        header = "-H '%s: %s'" % (element, kwargs['headers'][element])
        string_parts.append(header)
    _logger.debug("REQ: %s" % " ".join(string_parts))
    if 'data' in kwargs:
        _logger.debug("REQ BODY: %s\n" % (kwargs['data']))

The argument that it's at DEBUG level doesn't hold, because at the Atlanta operators summit it was clear that *all* operators run their servers at DEBUG, because OpenStack is impossible to actually troubleshoot at any other logging level. And if you run neutron at debug level, then all your nova credentials are in your logs. This is coupled with the fact that a large number of users are streaming all their logs directly into logstash.
This means they've now got a potentially public endpoint that lets them search for credentials. We need to stop doing that in a blanket way across OpenStack.

To manage notifications about this bug go to: https://bugs.launchpad.net/oslo-incubator/+bug/1328488/+subscriptions

From gerrit2 at review.openstack.org Mon Dec 22 14:00:59 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Mon, 22 Dec 2014 14:00:59 +0000
Subject: [Openstack-security] [openstack/nova] SecurityImpact review request change If3f88d8db4a726219573d0f1b65908408e3aa6a9
Message-ID:

Hi, I'd like you to take a look at this patch for potential SecurityImpact.

https://review.openstack.org/139672

Log:
commit 5346e76d3cb3b63a994585a5d2c127dcb7144e9f
Author: Matthew Gilliard
Date: Fri Dec 5 16:14:52 2014 +0000

    Adds ssl_overrides for glance client configuration

    We want a consistent way to apply ssl config to the various http(s) clients that nova creates. Following an ML discussion[1], this approach has the glance client using the global options in CONF.ssl.* with optional local overrides. These are DictOpts in each client's config section, for example:

    [ssl]
    ca_file = /etc/ssl/ca_file

    [glance]
    ssl_overrides = {ca_file:/etc/ssl/glance_ca_file}

    The keys which can be overridden are: ca_file, key_file, cert_file.
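The override semantics described in that commit message amount to a dictionary merge: start from the global [ssl] options, then let the per-client DictOpt win, restricted to the three allowed keys. The function below is an illustrative guess at that behaviour, not nova's actual implementation; only the option names come from the commit message above.

```python
ALLOWED_KEYS = {'ca_file', 'key_file', 'cert_file'}

def effective_ssl_config(global_ssl, ssl_overrides):
    # Global CONF.ssl.* values first; per-client overrides take precedence.
    cfg = {k: v for k, v in global_ssl.items() if k in ALLOWED_KEYS}
    cfg.update({k: v for k, v in ssl_overrides.items() if k in ALLOWED_KEYS})
    return cfg

conf_ssl = {'ca_file': '/etc/ssl/ca_file'}
glance_overrides = {'ca_file': '/etc/ssl/glance_ca_file'}
print(effective_ssl_config(conf_ssl, glance_overrides))
```

With the example values from the commit message, the glance client would end up using /etc/ssl/glance_ca_file while every other client keeps the global CA file.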
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-December/052175.html

    SecurityImpact: SSL config of Nova's Glance client
    DocImpact: New configuration option as described

    Change-Id: If3f88d8db4a726219573d0f1b65908408e3aa6a9

From jesse.pretorius at gmail.com Mon Dec 22 11:55:56 2014
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Mon, 22 Dec 2014 11:55:56 -0000
Subject: [Openstack-security] [Bug 1404862] [NEW] Horizon SSL configuration vulnerable
References: <20141222115556.21788.73486.malonedeb@gac.canonical.com>
Message-ID: <20141222115556.21788.73486.malonedeb@gac.canonical.com>

Public bug reported: Currently the Apache configuration for Horizon is very simple and therefore vulnerable to various forms of SSL and TLS attack vectors. The Qualys SSL test on the default setup results in a C grading. In order to ensure that best practices are implemented and anyone using os-ansible-deployment has a secure-by-default setup, this needs to be addressed.

** Affects: openstack-ansible
   Importance: Critical
   Assignee: Jesse Pretorius (jesse-pretorius)
   Status: In Progress

** Tags: juno-backport-potential security

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1404862

Title: Horizon SSL configuration vulnerable

Status in Ansible playbooks for deploying OpenStack: In Progress

Bug description: Currently the Apache configuration for Horizon is very simple and therefore vulnerable to various forms of SSL and TLS attack vectors. The Qualys SSL test on the default setup results in a C grading. In order to ensure that best practices are implemented and anyone using os-ansible-deployment has a secure-by-default setup, this needs to be addressed.
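Hardening an Apache vhost for this kind of report usually means disabling legacy protocols, restricting cipher suites, and enabling HSTS. The directives below sketch that class of change; the specific values are illustrative assumptions, not the configuration the eventual fix merged.

```apache
# Disable legacy SSL protocols; only TLS remains enabled.
SSLProtocol             all -SSLv2 -SSLv3
# Restrict the negotiable ciphers and let the server pick the order.
SSLCipherSuite          HIGH:!aNULL:!MD5:!RC4
SSLHonorCipherOrder     on
# Tell browsers to insist on HTTPS for future visits (requires mod_headers).
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```

Changes like these are what move a Qualys SSL Labs grade up from a C: the test penalizes SSLv3 support and weak cipher suites directly.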
To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1404862/+subscriptions

From 1404862 at bugs.launchpad.net Mon Dec 22 12:03:17 2014
From: 1404862 at bugs.launchpad.net (OpenStack Infra)
Date: Mon, 22 Dec 2014 12:03:17 -0000
Subject: [Openstack-security] [Bug 1404862] Fix proposed to os-ansible-deployment (master)
References: <20141222115556.21788.73486.malonedeb@gac.canonical.com>
Message-ID: <20141222120317.22142.80797.malone@gac.canonical.com>

Fix proposed to branch: master
Review: https://review.openstack.org/143430

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1404862

Title: Horizon SSL configuration vulnerable

Status in Ansible playbooks for deploying OpenStack: In Progress

Bug description: Currently the Apache configuration for Horizon is very simple and therefore vulnerable to various forms of SSL and TLS attack vectors. The Qualys SSL test on the default setup results in a C grading. In order to ensure that best practices are implemented and anyone using os-ansible-deployment has a secure-by-default setup, this needs to be addressed.
To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1404862/+subscriptions

From 1404862 at bugs.launchpad.net Mon Dec 22 17:09:52 2014
From: 1404862 at bugs.launchpad.net (OpenStack Infra)
Date: Mon, 22 Dec 2014 17:09:52 -0000
Subject: [Openstack-security] [Bug 1404862] Re: Horizon SSL configuration vulnerable
References: <20141222115556.21788.73486.malonedeb@gac.canonical.com>
Message-ID: <20141222170952.30981.52983.malone@chaenomeles.canonical.com>

Reviewed: https://review.openstack.org/143430
Committed: https://git.openstack.org/cgit/stackforge/os-ansible-deployment/commit/?id=b11236a6e25585c49c6bdf7d15eb17542bca0c88
Submitter: Jenkins
Branch: master

commit b11236a6e25585c49c6bdf7d15eb17542bca0c88
Author: Jesse Pretorius
Date: Mon Dec 22 12:01:14 2014 +0000

    Improve Apache SSL configuration

    This patch implements changes in the SSL configuration to ensure that Horizon is not vulnerable to common SSL and TLS attack vectors.

    Change-Id: I2e24ea3b99c7caadfbc8992ac78648cfdc6c301d
    Closes-Bug: #1404862

** Changed in: openstack-ansible
   Status: In Progress => Fix Committed

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1404862

Title: Horizon SSL configuration vulnerable

Status in Ansible playbooks for deploying OpenStack: Fix Committed

Bug description: Currently the Apache configuration for Horizon is very simple and therefore vulnerable to various forms of SSL and TLS attack vectors. The Qualys SSL test on the default setup results in a C grading. In order to ensure that best practices are implemented and anyone using os-ansible-deployment has a secure-by-default setup, this needs to be addressed.
To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-ansible/+bug/1404862/+subscriptions

From gerrit2 at review.openstack.org Wed Dec 24 06:08:07 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Wed, 24 Dec 2014 06:08:07 +0000
Subject: [Openstack-security] [openstack/neutron] SecurityImpact review request change I3a361d6590d1800b85791f23ac1cdfd79815341b
Message-ID:

Hi, I'd like you to take a look at this patch for potential SecurityImpact.

https://review.openstack.org/130834

Log:
commit c30cd82c82a626d9c940abed29b882f4b21872bd
Author: abhishekkekane
Date: Tue Oct 21 04:15:15 2014 -0700

    Eventlet green threads not released back to pool

    Presently, the wsgi server allows persistent connections, hence even after the response is sent to the client it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool.

    In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server.

    Added a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg:

    $ time nc localhost 8776
    real 1m0.063s

    Setting 'client_socket_timeout = 0' means do not timeout.

    DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900).
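The two options named in the DocImpact note above would then be set in the service's configuration file. A sketch (section placement and values are illustrative assumptions; only the option names and defaults come from the commit message):

```ini
[DEFAULT]
# Close the client socket once the response has been read (no keep-alive).
wsgi_keep_alive = false
# Drop idle client connections after 15 minutes; 0 means never time out.
client_socket_timeout = 900
```

Either option on its own mitigates the attack described in the bug: disabling keep-alive releases the green thread as soon as the response is read, while the socket timeout reclaims connections that a client deliberately holds open.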
SecurityImpact
Closes-Bug: #1361360
Change-Id: I3a361d6590d1800b85791f23ac1cdfd79815341b

From yangxurong at huawei.com Wed Dec 24 08:48:33 2014
From: yangxurong at huawei.com (Xurong Yang)
Date: Wed, 24 Dec 2014 08:48:33 -0000
Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests
References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com>
Message-ID: <20141224084833.30528.93182.launchpad@chaenomeles.canonical.com>

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
   Assignee: (unassigned) => Xurong Yang (idopra)

** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: sahara
   Assignee: (unassigned) => Xurong Yang (idopra)

--
You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1361360

Title: Eventlet green threads not released back to the pool leading to choking of new requests

Status in Cinder: Fix Released
Status in Cinder icehouse series: Fix Committed
Status in Cinder juno series: Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance): In Progress
Status in Glance icehouse series: New
Status in Orchestration API (Heat): New
Status in OpenStack Identity (Keystone): In Progress
Status in Keystone icehouse series: Confirmed
Status in Keystone juno series: Fix Committed
Status in OpenStack Neutron (virtual network service): In Progress
Status in neutron icehouse series: New
Status in OpenStack Compute (Nova): Fix Released
Status in OpenStack Compute (nova) icehouse series: New
Status in OpenStack Security Advisories: Won't Fix
Status in OpenStack Data Processing (Sahara, ex. Savanna): New

Bug description: Currently reproduced on Juno milestone 2, but this issue should be reproducible in all releases since its inception.
It is possible to choke OpenStack API controller services using the wsgi+eventlet library simply by not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova API service, the eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node, determine the wsgi pool size configured for the given service, then send that many requests to the service and, after receiving the responses, wait there indefinitely doing nothing, disrupting services for other tenants. Even when service providers have enabled the rate limiting feature, it is possible to choke the API services with a group (many tenants) attack.

The following program illustrates choking of nova-api services (but this problem is present in all other OpenStack API services using wsgi+eventlet). Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API.

============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep, check whether the client socket connection is released on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================

Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server.

Additional information: By default eventlet sends "Connection: keep-alive" when a response is sent to the client if keepalive is set to True, but it has no capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5.

Note: Once we have disabled keepalive in all the OpenStack API services using the wsgi library, it may impact existing applications built on the assumption that OpenStack API services use persistent connections. They might need to modify their applications if reconnection logic is not in place, and they may also see slower performance, since an HTTP connection has to be re-established for every request.

To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

From gerrit2 at review.openstack.org Fri Dec 26 03:07:50 2014
From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org)
Date: Fri, 26 Dec 2014 03:07:50 +0000
Subject: [Openstack-security] [openstack/heat] SecurityImpact review request change I303d87addeed8b103eeb26dbcc48e3acce06ee6a
Message-ID:

Hi, I'd like you to take a look at this patch for potential SecurityImpact.
https://review.openstack.org/144074 Log: commit bef9c2b89a11457d661238d6a7f5bd68c9f17cfa Author: yangxurong Date: Thu Dec 25 16:44:48 2014 +0800 Eventlet green threads not released back to pool Presently, the wsgi server allows persist connections hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Add a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). SecurityImpact Change-Id: I303d87addeed8b103eeb26dbcc48e3acce06ee6a Closes-Bug: #1361360 From 1361360 at bugs.launchpad.net Fri Dec 26 03:07:46 2014 From: 1361360 at bugs.launchpad.net (OpenStack Infra) Date: Fri, 26 Dec 2014 03:07:46 -0000 Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141226030746.17018.28992.malone@soybean.canonical.com> Fix proposed to branch: master Review: https://review.openstack.org/144074 ** Changed in: heat Status: New => In Progress -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Released Status in Cinder icehouse series: Fix Committed Status in Cinder juno series: Fix Released Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in Orchestration API (Heat): In Progress Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): In Progress Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Status in OpenStack Data Processing (Sahara, ex. Savanna): New Bug description: Currently reproduced on Juno milestone 2. but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using wsgi+eventlet library by simply not closing the client socket connection. Whenever a request is received by any OpenStack API service for example nova api service, eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned back to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node and determine the wsgi pool size configured for the given service and then send those many requests to the service and after receiving the response, wait there infinitely doing nothing leading to disrupting services for other tenants. Even when service providers have enabled rate limiting feature, it is possible to choke the API services with a group (many tenants) attack. 
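The exhaustion mechanism described above can be sketched as a small, self-contained model (all names here are illustrative; this is not the eventlet API): a server with a fixed worker pool only frees a worker when the client socket closes (keepalive) or right after the response is sent (no keepalive).

```python
# Toy model of the attack: clients that never close their sockets pin
# workers forever when keepalive is on, starving later requests.
class ToyPool:
    def __init__(self, size):
        self.size = size
        self.busy = 0

    def handle_request(self, keepalive, client_closes):
        if self.busy >= self.size:
            return "rejected"          # pool exhausted: request starves
        self.busy += 1                 # green thread taken from the pool
        # ... response is sent here ...
        if not keepalive or client_closes:
            self.busy -= 1             # worker returned to the pool
        return "served"

def attack(pool, keepalive, n_requests):
    """Simulate clients that hold their connections open indefinitely."""
    return [pool.handle_request(keepalive, client_closes=False)
            for _ in range(n_requests)]

exhausted = attack(ToyPool(10), keepalive=True, n_requests=12)
healthy = attack(ToyPool(10), keepalive=False, n_requests=12)
print(exhausted.count("rejected"))  # 2: requests beyond the pool size starve
print(healthy.count("rejected"))    # 0: workers freed after each response
```

This is why the reproduction program below only needs as many idle clients as the configured pool size to lock out every other tenant.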
The following program illustrates choking of nova-api services (but this problem is present in all other OpenStack API services using wsgi+eventlet).

Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API.

============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep, check whether the client socket connection
        # is released on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================

Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server.

Additional information: by default eventlet sends "Connection: keepalive" when keepalive is set to True and a response is sent to the client, but it has no capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5

Note: after we have disabled keepalive in all the OpenStack API services using the wsgi library, it might impact existing applications built on the assumption that OpenStack API services use persistent connections.
They might need to modify their applications if reconnection logic is not in place, and they might see performance degrade, since a new HTTP connection must be established for every request. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions From 1290486 at bugs.launchpad.net Fri Dec 26 06:48:41 2014 From: 1290486 at bugs.launchpad.net (James Polley) Date: Fri, 26 Dec 2014 06:48:41 -0000 Subject: [Openstack-security] [Bug 1290486] Re: neutron-openvswitch-agent does not recreate flows after ovsdb-server restarts References: <20140310180245.8017.61086.malonedeb@soybean.canonical.com> <20141225101932.22142.11448.malone@gac.canonical.com> Message-ID: It definitely solved my problem - but it seems to have caused problems for other people. There's a thread starting at http://lists.openstack.org/pipermail/openstack-dev/2014-October/049311.html - it continues into November and there's one last post from me in December. On Thu, Dec 25, 2014 at 11:19 AM, Hua Zhang wrote: > are you sure the patch https://review.openstack.org/#/c/101447/ can > resolve this question ? I still can hit this problem after confirming > my env has contained this patch. > > -- > You received this bug notification because you are a bug assignee. > https://bugs.launchpad.net/bugs/1290486 > > Title: > neutron-openvswitch-agent does not recreate flows after ovsdb-server > restarts > > Status in OpenStack Neutron (virtual network service): > Fix Released > Status in neutron icehouse series: > Fix Released > Status in tripleo - openstack on openstack: > Fix Released > > Bug description: > The DHCP requests were not being responded to after they were seen on > the undercloud network interface. The neutron services were restarted > in an attempt to ensure they had the newest configuration and knew > they were supposed to respond to the requests.
> > Rather than using the heat stack create (called in > devtest_overcloud.sh) to test, it was simple to use the following to > directly boot a baremetal node. > > nova boot --flavor $(nova flavor-list | grep > "|[[:space:]]*baremetal[[:space:]]*|" | awk '{print $2}') \ > --image $(nova image-list | grep > "|[[:space:]]*overcloud-control[[:space:]]*|" | awk '{print $2}') \ > bm-test1 > > Whilst the baremetal node was attempting to PXE boot, a restart of the > neutron services was performed. This allowed the baremetal node to > boot. > > It has been observed that a neutron restart was needed for each > subsequent reboot of the baremetal nodes to succeed. > > To manage notifications about this bug go to: > https://bugs.launchpad.net/neutron/+bug/1290486/+subscriptions > -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1290486 Title: neutron-openvswitch-agent does not recreate flows after ovsdb-server restarts Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron icehouse series: Fix Released Status in tripleo - openstack on openstack: Fix Released Bug description: The DHCP requests were not being responded to after they were seen on the undercloud network interface. The neutron services were restarted in an attempt to ensure they had the newest configuration and knew they were supposed to respond to the requests. Rather than using the heat stack create (called in devtest_overcloud.sh) to test, it was simple to use the following to directly boot a baremetal node. nova boot --flavor $(nova flavor-list | grep "|[[:space:]]*baremetal[[:space:]]*|" | awk '{print $2}') \ --image $(nova image-list | grep "|[[:space:]]*overcloud-control[[:space:]]*|" | awk '{print $2}') \ bm-test1 Whilst the baremetal node was attempting to PXE boot, a restart of the neutron services was performed. This allowed the baremetal node to boot.
It has been observed that a neutron restart was needed for each subsequent reboot of the baremetal nodes to succeed. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1290486/+subscriptions From gerrit2 at review.openstack.org Fri Dec 26 18:01:49 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Fri, 26 Dec 2014 18:01:49 +0000 Subject: [Openstack-security] [openstack/neutron] SecurityImpact review request change I3a361d6590d1800b85791f23ac1cdfd79815341b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/130834 Log: commit 9e88df03675aa39a55e06283bbc15cbe546ca0ee Author: abhishekkekane Date: Tue Oct 21 04:15:15 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persist connections. Hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Added a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). 
SecurityImpact Closes-Bug: #1361360 Change-Id: I3a361d6590d1800b85791f23ac1cdfd79815341b From gerrit2 at review.openstack.org Sat Dec 27 19:17:58 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Sat, 27 Dec 2014 19:17:58 +0000 Subject: [Openstack-security] [openstack/neutron] SecurityImpact review request change I3a361d6590d1800b85791f23ac1cdfd79815341b Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/130834 Log: commit 8e7a0dbb12082f6159d98a4628fb8a6fcd05e95a Author: abhishekkekane Date: Tue Oct 21 04:15:15 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persist connections. Hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Added a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). 
SecurityImpact Closes-Bug: #1361360 Change-Id: I3a361d6590d1800b85791f23ac1cdfd79815341b From 1118066 at bugs.launchpad.net Mon Dec 29 17:23:58 2014 From: 1118066 at bugs.launchpad.net (OpenStack Infra) Date: Mon, 29 Dec 2014 17:23:58 -0000 Subject: [Openstack-security] [Bug 1118066] Re: Nova should confirm quota requests against Keystone References: <20130207064604.19234.83660.malonedeb@gac.canonical.com> Message-ID: <20141229172400.22410.85804.launchpad@gac.canonical.com> ** Changed in: nova Status: Confirmed => In Progress -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1118066 Title: Nova should confirm quota requests against Keystone Status in OpenStack Compute (Nova): In Progress Bug description: The os-quota-sets API should check requests for /v2/:tenant/os-quota-sets/ against Keystone to ensure that :tenant exists. POST requests to a non-existent tenant should fail with a 400 error code. GET requests to a non-existent tenant may fail with a 400 error code. Current behavior is to return 200 with the default quotas. A slightly incompatible change would be to return a 302 redirect to /v2/:tenant/os-quota-sets/defaults in this case. Edit (2014-01-22) Original Description -------------------- GET /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist returns 200 with the default quotas. Moreover, POST /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist with updated quotas succeeds and that metadata is saved! I'm not sure if this is a bug or not. I cannot find any documentation on this interface.
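The check the report asks for could look roughly like this (a sketch with hypothetical names, not the actual nova code): validate the tenant against Keystone before accepting a quota write, returning a 400 for unknown tenants instead of silently saving metadata.

```python
# Sketch only: `known_tenant_ids` stands in for a Keystone lookup, and
# HTTPBadRequest stands in for a real 400 response object.
class HTTPBadRequest(Exception):
    """Stand-in for a 400 error response."""

def update_quota(known_tenant_ids, tenant_id, new_quota, quota_store):
    if tenant_id not in known_tenant_ids:  # would be a Keystone call
        raise HTTPBadRequest("tenant %s does not exist" % tenant_id)
    quota_store[tenant_id] = new_quota
    return quota_store[tenant_id]

quotas = {}
known = {"7ed3351cd58349659f0bfae002f76a77"}  # a tenant id seen in this digest
update_quota(known, "7ed3351cd58349659f0bfae002f76a77", {"cores": 20}, quotas)
try:
    update_quota(known, "this_tenant_does_not_exist", {"cores": 99}, quotas)
    rejected = False
except HTTPBadRequest:
    rejected = True  # and, crucially, no quota metadata was saved for it
```

The key behavioural change is the last branch: the write for the unknown tenant is refused rather than persisted.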
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1118066/+subscriptions From 1361360 at bugs.launchpad.net Tue Dec 30 08:28:57 2014 From: 1361360 at bugs.launchpad.net (OpenStack Infra) Date: Tue, 30 Dec 2014 08:28:57 -0000 Subject: [Openstack-security] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests References: <20140825203231.13086.48412.malonedeb@wampee.canonical.com> Message-ID: <20141230082857.21757.8766.malone@gac.canonical.com> Reviewed: https://review.openstack.org/130834 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=8e7a0dbb12082f6159d98a4628fb8a6fcd05e95a Submitter: Jenkins Branch: master commit 8e7a0dbb12082f6159d98a4628fb8a6fcd05e95a Author: abhishekkekane Date: Tue Oct 21 04:15:15 2014 -0700 Eventlet green threads not released back to pool Presently, the wsgi server allows persist connections. Hence even after the response is sent to the client, it doesn't close the client socket connection. Because of this problem, the green thread is not released back to the pool. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server. Added a parameter to take advantage of the new(ish) eventlet socket timeout behaviour. Allows closing idle client connections after a period of time, eg: $ time nc localhost 8776 real 1m0.063s Setting 'client_socket_timeout = 0' means do not timeout. DocImpact: Added wsgi_keep_alive option (default=True). Added client_socket_timeout option (default=900). SecurityImpact Closes-Bug: #1361360 Change-Id: I3a361d6590d1800b85791f23ac1cdfd79815341b ** Changed in: neutron Status: In Progress => Fix Committed -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. 
https://bugs.launchpad.net/bugs/1361360 Title: Eventlet green threads not released back to the pool leading to choking of new requests Status in Cinder: Fix Released Status in Cinder icehouse series: Fix Committed Status in Cinder juno series: Fix Released Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in Glance icehouse series: New Status in Orchestration API (Heat): In Progress Status in OpenStack Identity (Keystone): In Progress Status in Keystone icehouse series: Confirmed Status in Keystone juno series: Fix Committed Status in OpenStack Neutron (virtual network service): Fix Committed Status in neutron icehouse series: New Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) icehouse series: New Status in OpenStack Security Advisories: Won't Fix Status in OpenStack Data Processing (Sahara, ex. Savanna): New Bug description: Currently reproduced on Juno milestone 2. but this issue should be reproducible in all releases since its inception. It is possible to choke OpenStack API controller services using wsgi+eventlet library by simply not closing the client socket connection. Whenever a request is received by any OpenStack API service for example nova api service, eventlet library creates a green thread from the pool and starts processing the request. Even after the response is sent to the caller, the green thread is not returned back to the pool until the client socket connection is closed. This way, any malicious user can send many API requests to the API controller node and determine the wsgi pool size configured for the given service and then send those many requests to the service and after receiving the response, wait there infinitely doing nothing leading to disrupting services for other tenants. Even when service providers have enabled rate limiting feature, it is possible to choke the API services with a group (many tenants) attack. 
The following program illustrates choking of nova-api services (but this problem is present in all other OpenStack API services using wsgi+eventlet).

Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem. After you run the program below, you should try to invoke the API.

============================================================================================
import time
import requests
from multiprocessing import Process

def request(number):
    # Port is important here
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print "RESPONSE %s-%d" % (response.status_code, number)
        # During this sleep, check whether the client socket connection
        # is released on the API controller node.
        time.sleep(1000)
        print "Thread %d complete" % number
    except requests.exceptions.RequestException as ex:
        print "Exception occurred %d-%s" % (number, str(ex))

if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================

Presently, the wsgi server allows persistent connections if you configure keepalive to True, which is the default. In order to close the client socket connection explicitly after the response is sent and read successfully by the client, you simply have to set keepalive to False when you create a wsgi server.

Additional information: by default eventlet sends "Connection: keepalive" when keepalive is set to True and a response is sent to the client, but it has no capability to set the timeout and max parameters, for example: Keep-Alive: timeout=10, max=5

Note: after we have disabled keepalive in all the OpenStack API services using the wsgi library, it might impact existing applications built on the assumption that OpenStack API services use persistent connections.
They might need to modify their applications if reconnection logic is not in place, and they might see performance degrade, since a new HTTP connection must be established for every request. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions From robert.clark at hp.com Tue Dec 30 19:17:38 2014 From: robert.clark at hp.com (Clark, Robert Graham) Date: Tue, 30 Dec 2014 19:17:38 +0000 Subject: [Openstack-security] OpenStack Security Group Mid-Cycle Message-ID: All, I hope you all enjoyed the holidays and are ready to hit the ground running in the new year; there’s a lot of security work to do! As you will know if you’ve been attending the IRC meetings, it’s been very difficult to arrange the mid-cycle in a way that allows us to seamlessly join with the Barbican mid-cycle – this was desirable because we have so much overlap, both in terms of direction and expertise. Alas, we weren’t able to find dates that worked because of dates/budgets and various other constraints. Instead we will run the mid-cycles in parallel. The OSSG mid-cycle will be run in San Francisco while the Barbican team meet up in San Antonio. It’s our intention to enable direct collaboration during the week, and there are obvious intersections between our projects where we can get a lot done. The intended location for the mid-cycle is the Geekdom location in San Francisco; Travis visited them recently and found the space perfect for our needs – thanks Travis! Special thanks also go to Douglas and his parent organisation Rackspace for their help arranging the space! http://geekdomsf.com/ The proposed dates are Tuesday, February 17th -> Friday, February 20th. I’m awaiting final confirmation that we have the space but I don’t expect there to be any problems.
https://wiki.openstack.org/wiki/Sprints/OSSGKiloSprint Now is the time to take a look at the etherpad and add any ideas or proposals for work to undertake during the mid-cycle; I know a number of you have expressed ideas already, so let’s get them documented all in one place! https://etherpad.openstack.org/p/ossg-kilo-meetup Cheers -Rob From gerrit2 at review.openstack.org Wed Dec 31 15:31:48 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 31 Dec 2014 15:31:48 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I241ca72329f1ec9df778498b346d7b29c224d528 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/117366 Log: commit 386cf6e647cdc44dedca3431630ac0e66aecb91e Author: Brant Knudson Date: Wed Aug 27 17:06:44 2014 -0500 pki/ssl_setup configurable digest The digest to use for pki_setup couldn't be configured. The value was `default`, which means that the digest was sha1. Some security standards require the digest to be stronger (SHA2), so making the digest configurable will allow deployments to be compliant. SecurityImpact DocImpact New `message_digest_algorithm` configuration options are added to the [signing] and [ssl] sections which default to `default`. Change-Id: I241ca72329f1ec9df778498b346d7b29c224d528 Closes-Bug: #1362343 From gerrit2 at review.openstack.org Wed Dec 31 15:41:50 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 31 Dec 2014 15:41:50 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I241ca72329f1ec9df778498b346d7b29c224d528 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact.
https://review.openstack.org/117366 Log: commit 5742079a7fe1b8c577bb50fd53e4e3373d610e41 Author: Brant Knudson Date: Wed Aug 27 17:06:44 2014 -0500 pki/ssl_setup configurable digest The digest to use for pki_setup couldn't be configured. The value was `default`, which means that the digest was sha1. Some security standards require the digest to be stronger (SHA2), so making the digest configurable will allow deployments to be compliant. SecurityImpact DocImpact New `message_digest_algorithm` configuration options are added to the [signing] and [ssl] sections which default to `default`. Change-Id: I241ca72329f1ec9df778498b346d7b29c224d528 Closes-Bug: #1362343 From gerrit2 at review.openstack.org Wed Dec 31 15:58:55 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 31 Dec 2014 15:58:55 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I241ca72329f1ec9df778498b346d7b29c224d528 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/117366 Log: commit ecdb135cbd0296ca21f5bdb1f8bd1f086696433d Author: Brant Knudson Date: Wed Aug 27 17:06:44 2014 -0500 pki/ssl_setup configurable digest The digest to use for pki_setup couldn't be configured. The value was `default`, which means that the digest was sha1. Some security standards require the digest to be stronger (SHA2), so making the digest configurable will allow deployments to be compliant. SecurityImpact DocImpact New `message_digest_algorithm` configuration options are added to the [signing] and [ssl] sections which default to `default`. 
Change-Id: I241ca72329f1ec9df778498b346d7b29c224d528 Closes-Bug: #1362343 From gerrit2 at review.openstack.org Wed Dec 31 16:23:19 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 31 Dec 2014 16:23:19 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I241ca72329f1ec9df778498b346d7b29c224d528 Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/117366 Log: commit 6929e3fe15f35376f32bc890661373b7a66685cd Author: Brant Knudson Date: Wed Aug 27 17:06:44 2014 -0500 pki/ssl_setup configurable digest The digest to use for pki_setup couldn't be configured. The value was `default`, which on some systems means that the digest was sha1. Some security standards require the digest to be stronger (SHA2), so making the digest configurable will allow deployments to be compliant. SecurityImpact DocImpact New `message_digest_algorithm` configuration options are added to the [signing] and [ssl] sections which default to `default`. Change-Id: I241ca72329f1ec9df778498b346d7b29c224d528 Closes-Bug: #1362343 From gerrit2 at review.openstack.org Wed Dec 31 16:23:25 2014 From: gerrit2 at review.openstack.org (gerrit2 at review.openstack.org) Date: Wed, 31 Dec 2014 16:23:25 +0000 Subject: [Openstack-security] [openstack/keystone] SecurityImpact review request change I9e42c9bafc307ba1334fa641bab76f251722044d Message-ID: Hi, I'd like you to take a look at this patch for potential SecurityImpact. https://review.openstack.org/117367 Log: commit 7bb5b544fc6ab33d1ab4c1b140bccccf797c96d2 Author: Brant Knudson Date: Wed Aug 27 17:11:06 2014 -0500 Change the default digest for pki/ssl_setup to sha256 The default digest was `default`, which meant that the digest was the openssl default which may be sha1 or sha256 or better. Keystone will now set the default digest to sha256, which conforms to most security policies. This is for security hardening. 
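The configurable-digest idea in these commits can be sketched with hashlib standing in for openssl's digest selection. This is illustrative only: `message_digest_algorithm` mirrors the new option name, and mapping `default` to sha1 reflects the behaviour described in the commit message, not a universal openssl guarantee.

```python
import hashlib

def resolve_digest(message_digest_algorithm="default"):
    # 'default' historically meant sha1 on the systems described above,
    # which some security standards forbid.
    if message_digest_algorithm == "default":
        return "sha1"
    if message_digest_algorithm not in hashlib.algorithms_available:
        raise ValueError("unsupported digest: %s" % message_digest_algorithm)
    return message_digest_algorithm

# A hardened deployment opts into SHA2 explicitly:
digest_name = resolve_digest("sha256")
fingerprint = hashlib.new(digest_name, b"certificate-data").hexdigest()
```

The follow-up commit below goes one step further and changes the shipped default itself to sha256, so operators get the stronger digest without configuration.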
SecurityImpact DocImpact The `default_message_digest` configuration options now default to `sha256` instead of `default`. Change-Id: I9e42c9bafc307ba1334fa641bab76f251722044d Related-Bug: #1362343 From joshua.zhang at canonical.com Thu Dec 25 10:19:32 2014 From: joshua.zhang at canonical.com (Hua Zhang) Date: Thu, 25 Dec 2014 10:19:32 -0000 Subject: [Openstack-security] [Bug 1290486] Re: neutron-openvswitch-agent does not recreate flows after ovsdb-server restarts References: <20140310180245.8017.61086.malonedeb@soybean.canonical.com> Message-ID: <20141225101932.22142.11448.malone@gac.canonical.com> are you sure the patch https://review.openstack.org/#/c/101447/ can resolve this question ? I still can hit this problem after confirming my env has contained this patch. -- You received this bug notification because you are a member of OpenStack Security Group, which is subscribed to OpenStack. https://bugs.launchpad.net/bugs/1290486 Title: neutron-openvswitch-agent does not recreate flows after ovsdb-server restarts Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron icehouse series: Fix Released Status in tripleo - openstack on openstack: Fix Released Bug description: The DHCP requests were not being responded to after they were seen on the undercloud network interface. The neutron services were restarted in an attempt to ensure they had the newest configuration and knew they were supposed to respond to the requests. Rather than using the heat stack create (called in devtest_overcloud.sh) to test, it was simple to use the following to directly boot a baremetal node. nova boot --flavor $(nova flavor-list | grep "|[[:space:]]*baremetal[[:space:]]*|" | awk '{print $2}') \ --image $(nova image-list | grep "|[[:space:]]*overcloud-control[[:space:]]*|" | awk '{print $2}') \ bm-test1 Whilst the baremetal node was attempting to PXE boot, a restart of the neutron services was performed. This allowed the baremetal node to boot.
It has been observed that a neutron restart was needed for each subsequent reboot of the baremetal nodes to succeed. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1290486/+subscriptions
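The general shape of the fix referenced in this thread (https://review.openstack.org/#/c/101447/) can be sketched as a toy model: the agent installs a "canary" flow and, on each polling pass, re-creates all flows if the canary has vanished, which indicates that ovsdb-server or the bridge was reset. All names below are illustrative, not the real agent API.

```python
# Toy bridge: an ovsdb-server restart wipes every installed flow.
class ToyBridge:
    def __init__(self):
        self.flows = []

    def add_flow(self, flow):
        self.flows.append(flow)

    def reset(self):            # what an ovsdb-server restart does
        self.flows = []

CANARY = "table=23,actions=drop"  # illustrative canary flow

def setup_flows(bridge):
    bridge.add_flow(CANARY)
    bridge.add_flow("priority=1,actions=normal")

def rpc_loop_iteration(bridge):
    """One agent polling pass: detect a restart, rebuild if needed."""
    if CANARY not in bridge.flows:
        setup_flows(bridge)     # ovs restarted: recreate everything
        return "recreated"
    return "ok"

br = ToyBridge()
setup_flows(br)
assert rpc_loop_iteration(br) == "ok"
br.reset()                      # simulate ovsdb-server restart
status = rpc_loop_iteration(br)
print(status, len(br.flows))    # recreated 2
```

With this polling check in place, a restart of ovsdb-server no longer requires restarting the neutron services by hand, which is the manual workaround described in the bug report above.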