[Openstack-security] [Bug 1576765] Re: Potential DOS: Keystone Extra Fields

Tristan Cacqueray tdecacqu at redhat.com
Mon May 9 15:21:17 UTC 2016


Based on the above comments, I've switched this bug to public and
removed the OSSA task.

** Information type changed from Private Security to Public

** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
- 
- --
- 
  A user with rights to update a resource in Keystone (project, user,
  domain, etc.) can inject a near-unlimited amount of extra data,
  bounded only by the maximum request size. In the current design the
  extra fields can never be deleted (at best, a field's value can be
  shrunk to ~1 byte). An update that omits a field leaves that field's
  data intact.
  
  This means that a bad actor can update a keystone resource and do one of
  the following to DOS Keystone cluster, database replication, database
  traffic, etc:
  
  1) Create endless numbers of fields with very little data, causing
  JSON serialization/deserialization times to grow with the volume of
  elements.
  
  2) Create endless numbers of fields with large data sets, increasing
  the delta of what is stored in the RDBMS and putting extra load on
  replication and related processes for the shared data. This could be
  used as a vector to run the DB server out of RAM/cache/buffers/disk,
  and it also triggers issue (1) above.
  
  3) With HMT (hierarchical multitenancy), a domain/user can repeat the
  above with more and more resources, multiplying the effect.
  
  Memcache/caching will offset some of these issues until the cached
  keystone resource exceeds the memcached slab size (1 MB) and can no
  longer be stored, which could put excessive load on the
  memcached/caching servers.
  
  With caching enabled, it is possible to run the keystone processes
  out of memory (a DOS) via the request-local cache, which keeps a
  msgpack of each resource in memory so that each resource is fetched
  from the backend only once per HTTP request.
  
  --- PROPOSED FIX ---
  * Issue a security bug fix that, by default, disables the ability to store data in the extra fields for *ALL* keystone resources.
  * Migrate all fields that keystone supports to first-class attributes (columns) in the SQL backend(s).
  * Deprecate "extra" field storage (toggled via a config value) over two cycles before removal; in the P cycle extra fields will no longer be supported, and all non-standard data will need to be migrated to an external metadata store.

** Changed in: ossa
       Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1576765

Title:
  Potential DOS: Keystone Extra Fields

Status in OpenStack Identity (keystone):
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  A user with rights to update a resource in Keystone (project, user,
  domain, etc.) can inject a near-unlimited amount of extra data,
  bounded only by the maximum request size. In the current design the
  extra fields can never be deleted (at best, a field's value can be
  shrunk to ~1 byte). An update that omits a field leaves that field's
  data intact.
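  The non-deletable behavior described above can be modeled with a
  short sketch (illustrative only; this is not keystone's actual update
  code, and `update_resource` is a hypothetical helper standing in for
  the merge-style update semantics):

```python
# Illustrative model of merge-style resource updates: keys omitted
# from the request body persist, so injected extra fields can never
# be removed, only shrunk.
def update_resource(stored: dict, request_body: dict) -> dict:
    """Merge request_body into stored; omitted keys are left intact."""
    merged = dict(stored)
    merged.update(request_body)
    return merged

project = {"id": "p1", "name": "demo"}
# A user with update rights injects an arbitrary extra field.
project = update_resource(project, {"junk": "x" * 1024})
# A later update that omits "junk" does not remove it...
project = update_resource(project, {"name": "demo2"})
assert "junk" in project
# ...the best the user can do is shrink it to ~1 byte.
project = update_resource(project, {"junk": "x"})
```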

  This means that a bad actor can update a keystone resource and do one
  of the following to DOS Keystone cluster, database replication,
  database traffic, etc:

  1) Create endless numbers of fields with very little data, causing
  JSON serialization/deserialization times to grow with the volume of
  elements.

  2) Create endless numbers of fields with large data sets, increasing
  the delta of what is stored in the RDBMS and putting extra load on
  replication and related processes for the shared data. This could be
  used as a vector to run the DB server out of RAM/cache/buffers/disk,
  and it also triggers issue (1) above.

  3) With HMT (hierarchical multitenancy), a domain/user can repeat the
  above with more and more resources, multiplying the effect.
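  Point (1) can be illustrated with a rough sketch of how the payload,
  and with it the serialization/deserialization work, grows with the
  number of injected fields (illustrative only; the field names and
  counts are made up):

```python
import json

# Sketch of point (1): many tiny extra fields inflate the JSON
# payload, and serialization work grows with the element count.
def resource_with_extra_fields(n: int) -> dict:
    base = {"id": "p1", "name": "demo"}
    base.update({f"f{i}": "x" for i in range(n)})  # n ~1-byte extra fields
    return base

small = json.dumps(resource_with_extra_fields(10))
large = json.dumps(resource_with_extra_fields(100_000))
print(len(small), len(large))  # payload grows roughly linearly with n
```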

  Memcache/caching will offset some of these issues until the cached
  keystone resource exceeds the memcached slab size (1 MB) and can no
  longer be stored, which could put excessive load on the
  memcached/caching servers.
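  A back-of-the-envelope check of that ceiling (a sketch: json is used
  as a stand-in serializer, and 1 MB is memcached's default maximum
  item size):

```python
import json

MEMCACHED_SLAB_LIMIT = 1024 * 1024  # memcached default max item size: 1 MB

# Sketch: once injected extra fields push the serialized resource past
# the slab limit, it can no longer be cached and every request falls
# through to the backend.
resource = {"id": "p1", "name": "demo"}
resource.update({f"f{i}": "x" * 100 for i in range(20_000)})  # ~2 MB of extras

payload = json.dumps(resource).encode()
cacheable = len(payload) <= MEMCACHED_SLAB_LIMIT
print(len(payload), cacheable)
```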

  With caching enabled, it is possible to run the keystone processes
  out of memory (a DOS) via the request-local cache, which keeps a
  msgpack of each resource in memory so that each resource is fetched
  from the backend only once per HTTP request.

  --- PROPOSED FIX ---
  * Issue a security bug fix that, by default, disables the ability to store data in the extra fields for *ALL* keystone resources.
  * Migrate all fields that keystone supports to first-class attributes (columns) in the SQL backend(s).
  * Deprecate "extra" field storage (toggled via a config value) over two cycles before removal; in the P cycle extra fields will no longer be supported, and all non-standard data will need to be migrated to an external metadata store.
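  The first bullet could look roughly like this (a hypothetical sketch,
  not keystone's actual implementation; `STORE_EXTRA_FIELDS`,
  `KNOWN_ATTRS`, and `filter_update` are invented for illustration):

```python
# Hypothetical sketch of the proposed mitigation: behind a config
# toggle (default off), drop any request attributes that are not
# first-class columns before storing the resource.
STORE_EXTRA_FIELDS = False  # proposed default after the fix

KNOWN_ATTRS = {"id", "name", "description", "enabled", "domain_id"}

def filter_update(body: dict) -> dict:
    """Strip unknown attributes unless extra-field storage is enabled."""
    if STORE_EXTRA_FIELDS:
        return body
    return {k: v for k, v in body.items() if k in KNOWN_ATTRS}

print(filter_update({"name": "demo", "junk": "x" * 1024}))
```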

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1576765/+subscriptions




More information about the Openstack-security mailing list