[Openstack-security] [Bug 1790706] Re: Additional metadata service endpoints on OpenStack accessible

Jeremy Stanley fungi at yuggoth.org
Fri Feb 28 14:07:42 UTC 2020


As this was fixed by the time Stein was released, and stable/rocky is
entering extended maintenance now, it probably makes no sense to try to
issue an advisory (anyone is welcome to submit a backport to branches
under extended maintenance, though no further point releases will be
tagged for them).

** Description changed:

- This issue is being treated as a potential security risk under
- embargo. Please do not make any public mention of embargoed
- (private) security vulnerabilities before their coordinated
- publication by the OpenStack Vulnerability Management Team in the
- form of an official OpenStack Security Advisory. This includes
- discussion of the bug or associated fixes in public forums such as
- mailing lists, code review systems and bug trackers. Please also
- avoid private disclosure to other individuals not already approved
- for access to this information, and provide this same reminder to
- those who are made aware of the issue prior to publication. All
- discussion should remain confined to this private bug report, and
- any proposed fixes should be added to the bug as attachments. This
- embargo shall not extend past 2020-05-27 and will be made
- public by or on that date if no fix is identified.
- 
  Note: I'm reporting this on behalf of our partner SAP. While the bug is
  about Newton, one of our neutron developers believes this may still be
  valid for newer versions: "The bug might still be valid upstream, since
  there is no specific case where they are filtering based on the IP
  169.254.169.254, as they are passing the same port as such."
  
  # Setup:
  OpenStack Newton with `force_metadata = true` on all network nodes
  Kubernetes Gardener setup (seed+shoot) on OpenStack
  
  # Detailed description from the hacker simulation:
  
  By running an `nmap -sn …` scan (ping scan) we discovered several
  endpoints in the shoot network (apart from the nodes that can be seen
  from `kubectl --kubeconfig myshoot.kubeconfig get nodes`). We noticed
  that some of these endpoints also serve metadata and user data on
  port 80, i.e. the metadata service is not only available from the
  well-known metadata service IP (http://169.254.169.254/…,
  https://docs.openstack.org/nova/latest/user/metadata-service.html) but
  also from those other addresses. In our test the endpoints were
  10.250.0.2-7. We learned that these endpoints are probably the
  OpenStack DHCP nodes, i.e. every OpenStack DHCP endpoint appears to
  also serve the metadata.
  While the accessibility of the metadata service is a known problem,
  this situation is “worse” (compared to e.g. Gardener seed and shoot
  clusters on AWS) for the following reasons:
  1. If a network policy is applied to block access from cluster payloads
  to the metadata service, it is not enough to block the well-known
  `169.254.169.254`; access to all the other existing endpoints must be
  blocked as well. How can the definite set of endpoints be determined?
  Are they guaranteed not to change during the lifetime of a cluster?
  2. If the metadata service is only accessible via 169.254.169.254, the
  known `kubectl proxy` issue (details can be shared if needed) cannot
  be used to gain access to the metadata service, as the link-local
  169.254.0.0/16 address range is not allowed by the Kubernetes API
  server as an endpoint address. An address such as 10.250…, however,
  is allowed, i.e. a shoot user on OpenStack can use the attack to gain
  access to the metadata service in the seed network.
  The fact that no fix is in sight for the `kubectl proxy` issue, and
  that it might not be patchable, poses an additional risk regarding
  point 2. We will try to follow up on that with the Kubernetes security
  team once again.
  
  # Detailed information:
  Due to the `force_metadata` setting, the DHCP namespaces expose the metadata service:
  
  # ip netns exec qdhcp-54ad9fe0-2ce5-4083-a32b-ca744e806d1f netstat -tulpen
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
  tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          1934832519 54198/python
  tcp        0      0 10.222.0.3:53           0.0.0.0:*               LISTEN      0          1934920972 54135/dnsmasq
  tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      0          1934920970 54135/dnsmasq
  tcp        0      0 fe80::f816:3eff:fe01:53 :::*                    LISTEN      198        1934909191 54135/dnsmasq
  udp        0      0 10.222.0.3:53           0.0.0.0:*                           0          1934920971 54135/dnsmasq
  udp        0      0 169.254.169.254:53      0.0.0.0:*                           0          1934920969 54135/dnsmasq
  udp        0      0 0.0.0.0:67              0.0.0.0:*                           0          1934920966 54135/dnsmasq
  udp        0      0 fe80::f816:3eff:fe01:53 :::*                                198        1934909190 54135/dnsmasq
  
  The problem is that the metadata proxy is listening on 0.0.0.0:80 instead of 169.254.169.254:80.
  This lets the metadata service also respond on the DHCP port IP addresses, which cannot be blocked easily.
  
  This fix mitigated the problem:
  --- neutron.org/agent/metadata/namespace_proxy.py       2018-08-31 12:42:25.901681939 +0000
  +++ neutron/agent/metadata/namespace_proxy.py   2018-08-31 12:43:17.541826180 +0000
  @@ -130,7 +130,7 @@
               self.router_id)
           proxy = wsgi.Server('neutron-network-metadata-proxy',
                               num_threads=self.proxy_threads)
  -        proxy.start(handler, self.port)
  +        proxy.start(handler, self.port, '169.254.169.254')
  
           # Drop privileges after port bind
           super(ProxyDaemon, self).run()

** Changed in: ossa
       Status: Incomplete => Won't Fix

** Information type changed from Private Security to Public

** Tags added: security

-- 
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1790706

Title:
  Additional metadata service endpoints on OpenStack accessible

Status in neutron:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Note: I'm reporting this on behalf of our partner SAP. While the bug
  is about Newton, one of our neutron developers believes this may still
  be valid for newer versions: "The bug might still be valid upstream,
  since there is no specific case where they are filtering based on the
  IP 169.254.169.254, as they are passing the same port as such."

  # Setup:
  OpenStack Newton with `force_metadata = true` on all network nodes
  Kubernetes Gardener setup (seed+shoot) on OpenStack

  # Detailed description from the hacker simulation:

  By running an `nmap -sn …` scan (ping scan) we discovered several
  endpoints in the shoot network (apart from the nodes that can be seen
  from `kubectl --kubeconfig myshoot.kubeconfig get nodes`). We noticed
  that some of these endpoints also serve metadata and user data on
  port 80, i.e. the metadata service is not only available from the
  well-known metadata service IP (http://169.254.169.254/…,
  https://docs.openstack.org/nova/latest/user/metadata-service.html) but
  also from those other addresses. In our test the endpoints were
  10.250.0.2-7. We learned that these endpoints are probably the
  OpenStack DHCP nodes, i.e. every OpenStack DHCP endpoint appears to
  also serve the metadata.
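
  A minimal sketch (Python 3, standard library only) of the kind of
  probe that confirms this; the candidate addresses are the ones from
  our test and the metadata paths are the usual Nova ones:

  import urllib.request

  # Candidate endpoints seen in the ping scan (values from our test)
  candidates = ['169.254.169.254'] + ['10.250.0.%d' % i for i in range(2, 8)]
  # Well-known paths served by the Nova metadata API / metadata proxy
  paths = ['/openstack', '/latest/meta-data/']

  for host in candidates:
      for path in paths:
          url = 'http://%s%s' % (host, path)
          try:
              body = urllib.request.urlopen(url, timeout=2).read()
              print('%s -> %d bytes (metadata reachable)' % (url, len(body)))
          except Exception as exc:  # refused / timed out / HTTP error
              print('%s -> %s' % (url, exc))
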
  While the accessibility of the metadata service is a known problem,
  this situation is “worse” (compared to e.g. Gardener seed and shoot
  clusters on AWS) for the following reasons:
  1. If a network policy is applied to block access from cluster payloads
  to the metadata service, it is not enough to block the well-known
  `169.254.169.254`; access to all the other existing endpoints must be
  blocked as well. How can the definite set of endpoints be determined
  (see the sketch after this list)? Are they guaranteed not to change
  during the lifetime of a cluster?
  2. If the metadata service is only accessible via 169.254.169.254, the
  known `kubectl proxy` issue (details can be shared if needed) cannot
  be used to gain access to the metadata service, as the link-local
  169.254.0.0/16 address range is not allowed by the Kubernetes API
  server as an endpoint address. An address such as 10.250…, however,
  is allowed, i.e. a shoot user on OpenStack can use the attack to gain
  access to the metadata service in the seed network.
  The fact that no fix is in sight for the `kubectl proxy` issue, and
  that it might not be patchable, poses an additional risk regarding
  point 2. We will try to follow up on that with the Kubernetes security
  team once again.
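
  Regarding the question in point 1: one way to enumerate the addresses
  that would have to be blocked is to list the DHCP ports of the
  affected networks via the Neutron API. A sketch using openstacksdk
  (the cloud name is a placeholder; listing other projects' ports
  generally requires admin credentials):

  import openstack

  # 'mycloud' is a placeholder entry in clouds.yaml
  conn = openstack.connect(cloud='mycloud')

  # With force_metadata enabled, every fixed IP of a DHCP port is an
  # address on which the dnsmasq/metadata-proxy pair may be reachable.
  for port in conn.network.ports(device_owner='network:dhcp'):
      for fixed_ip in port.fixed_ips:
          print(port.network_id, fixed_ip['ip_address'])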

  # Detailed information:
  Due to the `force_metadata` setting, the DHCP namespaces expose the metadata service:

  # ip netns exec qdhcp-54ad9fe0-2ce5-4083-a32b-ca744e806d1f netstat -tulpen
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
  tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          1934832519 54198/python
  tcp        0      0 10.222.0.3:53           0.0.0.0:*               LISTEN      0          1934920972 54135/dnsmasq
  tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      0          1934920970 54135/dnsmasq
  tcp        0      0 fe80::f816:3eff:fe01:53 :::*                    LISTEN      198        1934909191 54135/dnsmasq
  udp        0      0 10.222.0.3:53           0.0.0.0:*                           0          1934920971 54135/dnsmasq
  udp        0      0 169.254.169.254:53      0.0.0.0:*                           0          1934920969 54135/dnsmasq
  udp        0      0 0.0.0.0:67              0.0.0.0:*                           0          1934920966 54135/dnsmasq
  udp        0      0 fe80::f816:3eff:fe01:53 :::*                                198        1934909190 54135/dnsmasq

  The problem is that the metadata proxy is listening on 0.0.0.0:80 instead of 169.254.169.254:80.
  This lets the metadata service also respond on the DHCP port IP addresses, which cannot be blocked easily.

  This fix mitigated the problem:
  --- neutron.org/agent/metadata/namespace_proxy.py       2018-08-31 12:42:25.901681939 +0000
  +++ neutron/agent/metadata/namespace_proxy.py   2018-08-31 12:43:17.541826180 +0000
  @@ -130,7 +130,7 @@
               self.router_id)
           proxy = wsgi.Server('neutron-network-metadata-proxy',
                               num_threads=self.proxy_threads)
  -        proxy.start(handler, self.port)
  +        proxy.start(handler, self.port, '169.254.169.254')

           # Drop privileges after port bind
           super(ProxyDaemon, self).run()
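
  The change simply passes an explicit bind address to the proxy's WSGI
  server, so it listens only on the canonical metadata address instead
  of on every address configured in the namespace. A standalone
  illustration of the underlying difference (plain Python sockets, not
  Neutron code):

  import socket

  def make_listener(host, port=8080):
      # '0.0.0.0' answers on every IP configured in the namespace (so on
      # the DHCP port IPs as well); '169.254.169.254' answers only there.
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      s.bind((host, port))
      s.listen(5)
      return s

  # make_listener('0.0.0.0')          # reachable via 10.222.0.3:8080 too
  # make_listener('169.254.169.254')  # reachable only via the metadata IP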

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1790706/+subscriptions



More information about the Openstack-security mailing list