[openstack-dev] FW: [Keystone][Folsom] Token re-use

Kant, Arun arun.kant at hp.com
Fri Jun 21 19:04:08 UTC 2013



From: Adam Young [mailto:ayoung at redhat.com]
Sent: Thursday, June 20, 2013 6:30 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

On 06/20/2013 04:50 PM, Ali, Haneef wrote:

1)      I'm really not sure how that will solve the original issue (token table size increase).  Of course we can have a job to remove the expired tokens.
It is not expiry that is the issue, but revocation.  Expiry is handled by the fact that the token is a signed document with a timestamp in it.  We don't really need to store expired tokens at all.
[Arun] One issue is the unlimited number of active tokens keystone can issue for the same credentials, which could be turned into a DoS attack on cloud services. So we could look to keystone for a solution, as token generation is one of its key responsibilities. Removal of expired tokens is a separate aspect that will be needed at some point regardless of whether tokens are re-used or not.
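That "removal of expired tokens" job can be sketched in a few lines. This is only an illustration of the idea, assuming a token table with an `expires` timestamp column; the table and column names here are hypothetical, not keystone's actual schema.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical schema: a token table keyed by id with an 'expires'
# timestamp column (illustrative names, not keystone's real DDL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id TEXT PRIMARY KEY, expires TEXT)")

now = datetime(2013, 6, 21, 12, 0, 0)
conn.execute("INSERT INTO token VALUES (?, ?)",
             ("expired", (now - timedelta(hours=1)).isoformat()))
conn.execute("INSERT INTO token VALUES (?, ?)",
             ("live", (now + timedelta(hours=1)).isoformat()))

# The cleanup job itself: drop every row whose expiry is in the past.
conn.execute("DELETE FROM token WHERE expires < ?", (now.isoformat(),))
remaining = [r[0] for r in conn.execute("SELECT id FROM token")]
print(remaining)  # only the live token survives
```

Run periodically (e.g. from cron), this keeps the table bounded by the number of *active* tokens, which is exactly why expiry alone doesn't address the unlimited-active-tokens concern above.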



2)      We really have to think about how the other services are using keystone.  Keystone "createToken" volume is going to increase; fixing one issue is going to create another.
Yes it will.  But in the past, the load was on Keystone token validate, and PKI has removed that load.  Right now, the greater load on Keystone is coming from token create, but that is because token caching is not in place.  With proper caching, Keystone would be hit only once for most workloads.  It is currently hit for every remote call.  It is not token generation itself that is the issue; it is the rate at which tokens are issued that needs to be throttled back.
[Arun] We cannot just have a solution for happy-path situations. Being available in the cloud, there are going to be varying types of clients, and we cannot expect that each of them will have caching or will always be able to work with the PKI token format (like third-party services/applications running *on the cloud*). Throttling token issuance requests will require complex rate-limiting logic because of the various input combinations and business rules associated with it. There can be another solution where keystone re-uses an active token based on some selection logic and is still able to serve auth requests without rate-limiting errors.
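The re-use idea Arun describes can be sketched as a lookup before minting: if an active token for the same (user, scope) has enough lifetime left, hand it back instead of creating a new row. Everything here (function names, thresholds, the in-memory store) is illustrative, not keystone internals.

```python
from datetime import datetime, timedelta
import uuid

TOKENS = {}                           # (user, scope) -> (token_id, expires)
LIFETIME = timedelta(hours=24)        # assumed default token lifetime
MIN_REMAINING = timedelta(minutes=10) # don't re-use a nearly-dead token

def issue_token(user, scope, now):
    cached = TOKENS.get((user, scope))
    if cached and cached[1] - now > MIN_REMAINING:
        return cached[0]              # re-use the active token
    token = (uuid.uuid4().hex, now + LIFETIME)
    TOKENS[(user, scope)] = token
    return token[0]

now = datetime(2013, 6, 21)
first = issue_token("alice", "projectA", now)
# A second request minutes later gets the same token back, so a rogue
# client hammering createToken cannot grow the token table.
assert issue_token("alice", "projectA", now + timedelta(minutes=5)) == first
```

The `MIN_REMAINING` threshold is the "selection logic" knob: it trades off table growth against handing out tokens that expire mid-operation.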



1.       If I understood correctly, swift is using memcache to increase validateToken performance.  What will happen to it?  Obviously load on "validateToken" will also increase.
Validate token happens in process with PKI tokens, not via remote call. Memcache just prevents swift from having to make that check more than once per token.  Revocation still needs to be checked every time.
[Arun] There are issues with the PKI token approach as well (lifespan of the token, data size limit, role and status changes after token generation). If a shorter lifespan is used, then essentially we will be increasing createToken requests.
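Adam's split between the cached signature check and the every-request revocation check can be shown in miniature. The functions and the dict standing in for memcache are placeholders; the real signature check is CMS verification of the PKI token.

```python
# Memcache stand-in: token_id -> parsed claims from a verified token.
validated = {}
revoked = set()

def parse_and_verify(token_id):
    # Placeholder for the expensive in-process CMS signature verification.
    return {"user": "alice", "token_id": token_id}

def validate(token_id):
    claims = validated.get(token_id)
    if claims is None:
        claims = parse_and_verify(token_id)   # only on cache miss
        validated[token_id] = claims
    if token_id in revoked:                   # checked every single time
        return None
    return claims

assert validate("t1")["user"] == "alice"
revoked.add("t1")
assert validate("t1") is None   # cached, but revocation still catches it
```

The point of the sketch: the cache only skips the signature work, so shortening the token lifespan does not change validation cost per request, it only shifts load toward createToken, as Arun notes.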




2.      In a few cases I have seen VM creation take more than 5 minutes (download the image from glance and create the VM).  A short-lived token (5 min) will be real fun in this case.
That is what trusts are for.  Nova should not be using a bearer token to perform operations on behalf of the user.  Nova should be getting a delegated token via a trust to perform those operations.  If a VM takes 5 minutes, it should not matter if the tokens time out, as Nova will get a token when it needs it. Bearer tokens are a poor design approach, and we have work going on that will remedy that.
[Arun] Not sure how a delegated token or the current v3 trust/role model is going to work here, as the token needs to carry user roles (or at least delegated permissions covering *all* of the user's privileges) to work on the user's behalf. Are we talking about Nova impersonating the user in some way?
With short-lived tokens (in non-PKI format), we are just diverting request load from validateToken to createToken, which is a relatively expensive operation.
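For readers following the trust exchange above, the v3 OS-TRUST flow roughly works like this: the user creates a trust delegating specific roles to nova's service user, and nova later authenticates against that trust to get a fresh scoped token whenever it needs one. The payload below follows the OS-TRUST extension's shape, but all IDs are placeholders and this is a sketch, not a tested client.

```python
import json
from datetime import datetime, timedelta

# Trust creation request body, as the user (trustor) would POST it to
# /v3/OS-TRUST/trusts. With impersonation=True the tokens nova obtains
# carry the trustor's identity, limited to the delegated roles.
trust_request = {
    "trust": {
        "trustor_user_id": "alice-id",         # the delegating user
        "trustee_user_id": "nova-service-id",  # who may act on her behalf
        "project_id": "project-id",
        "impersonation": True,
        "roles": [{"name": "member"}],         # only the delegated roles
        "expires_at": (datetime(2013, 6, 21)
                       + timedelta(hours=1)).isoformat(),
    }
}
# When a long-running operation needs a token mid-flight, nova
# authenticates with its own credentials plus the returned trust_id --
# no long-lived bearer token from the user is required.
print(json.dumps(trust_request, indent=2))
```

This answers the delegated-permissions question in part: the trust names the roles explicitly, so it need not carry *all* of the user's privileges, only those required for the operation.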

We need some smarter mechanism to limit proliferation of tokens as they are essentially user's credentials for a limited time.



Thanks
Haneef



From: Ravi Chunduru [mailto:ravivsn at gmail.com]
Sent: Thursday, June 20, 2013 11:49 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

+1
On Thu, Jun 20, 2013 at 11:37 AM, Dolph Mathews <dolph.mathews at gmail.com> wrote:

On Wed, Jun 19, 2013 at 2:20 PM, Adam Young <ayoung at redhat.com> wrote:
I really want to go the other way on this:  I want tokens to be very short-lived, ideally something like 1 minute, but probably 5 minutes to account for clock skew.  I want to get rid of token revocation list checking.  I'd like to get away from revocation altogether:  tokens are not stored in the backend.  If they are ephemeral, we can just check that the token has a valid signature and that the time has not expired.
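The ephemeral-token validation Adam describes reduces to two checks, with a skew allowance on the expiry. A minimal sketch, where `verify_signature` is a stand-in for the real PKI/CMS check:

```python
from datetime import datetime, timedelta

SKEW = timedelta(minutes=5)   # tolerance for clock drift between hosts

def verify_signature(token):
    # Placeholder for cryptographic verification of the signed document.
    return token.get("sig") == "valid"

def is_valid(token, now):
    # No backend lookup, no revocation list: signature + time, nothing else.
    return verify_signature(token) and now <= token["expires"] + SKEW

tok = {"sig": "valid", "expires": datetime(2013, 6, 21, 12, 0)}
assert is_valid(tok, datetime(2013, 6, 21, 12, 4))      # within skew
assert not is_valid(tok, datetime(2013, 6, 21, 12, 6))  # past skew window
```

With a 5-minute lifetime, a compromised or revoked credential is only useful until the token dies, which is what makes dropping revocation checks tolerable.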

+10







On 06/19/2013 12:59 PM, Ravi Chunduru wrote:
Thats still an open item in this thread.

Let me summarize once again

1) The use case for keystone not re-issuing the same token for the same credentials
2) Rate-limit cons and service unavailability
3) Further information on python-keyring, if not going with keystone re-issuing tokens
On Wed, Jun 19, 2013 at 9:16 AM, Yee, Guang <guang.yee at hp.com> wrote:
Just out of curiosity, is there really a use case where a user needs to request multiple tokens of the same scope, where the only difference is the expiration date?


Guang


From: Dolph Mathews [mailto:dolph.mathews at gmail.com]
Sent: Wednesday, June 19, 2013 7:27 AM

To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use


On Wed, Jun 19, 2013 at 1:42 AM, Ali, Haneef <haneef.ali at hp.com> wrote:

1)      Token caching is not always going to help; it depends on the application.  E.g. a user writes a cron job to check the health of swift by listing a predefined container every minute.  This will obviously create a token every minute.



2)      Also I'd like to understand how rate limiting is done for v3 tokens.  Rate limiting involves source IP + request pattern.  In v3 there are so many ways to get a token that rate limiting becomes too complex.


Rate limit the number of requests to POST /v2.0/tokens and POST /v3/auth/tokens

Just for unscoped tokens, all the following are equivalent requests.  In the case of scoped tokens we have even more combinations.  Rogue clients can easily mess with rate limiting by mixing request patterns. Also, rate limiting across regions may not be possible.

a.      UserId/Password

b.      UserName/Password/domainId

c.      UserName/Password/DomainName
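One way around the request-pattern problem above is to key the limit on the resolved user id rather than the raw request body, so all three authentication shapes count against the same bucket. A sliding-window sketch (names and limits are illustrative; this ignores the cross-region problem, which needs shared state):

```python
from collections import defaultdict, deque

WINDOW = 60.0   # seconds
LIMIT = 10      # token creations per window per user

requests = defaultdict(deque)   # user_id -> timestamps of recent creates

def allow_token_create(user_id, now):
    q = requests[user_id]
    while q and now - q[0] > WINDOW:
        q.popleft()             # drop requests outside the window
    if len(q) >= LIMIT:
        return False            # throttled
    q.append(now)
    return True

now = 1000.0
results = [allow_token_create("alice", now + i) for i in range(12)]
print(results.count(True))  # first 10 allowed, then throttled
```

The catch, of course, is that resolving the user id requires authenticating the request first, so the limiter only protects token creation, not the credential-validation work itself.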

Thanks
Haneef

From: Ravi Chunduru [mailto:ravivsn at gmail.com]
Sent: Tuesday, June 18, 2013 11:02 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

I agree we need a way to overcome these rogue clients, but with rate limiting genuine requests will get affected. Then one would need retries, and sometimes critical operations fail. It defeats the whole point of being available.


About keyrings: how do we handle a service that is using JSON API calls and not the python clients?

Thanks,
-Ravi.
On Tue, Jun 18, 2013 at 6:37 PM, Adam Young <ayoung at redhat.com> wrote:
On 06/18/2013 09:13 PM, Kant, Arun wrote:
The issue with having an unmanaged number of tokens for the same credentials is that it can be easily exploited. Getting a token is one of the initial steps (the gateway) to access services. A rogue client can keep creating an unlimited number of tokens and possibly mount a denial-of-service attack on services. If there were a somewhat limited number of tokens, then the cloud provider could use a tokenId-based rate-limiting approach.
Better here to rate limit, then.




Extending the expiry by some fixed interval might be okay, as that can be considered as continuing the user session, similar to what happens when a user keeps browsing an application while logged in.
Tokens are resources created by Keystone.  No reason to ask to create something new if it is not needed.

The caching needs to be done client side.  We have ongoing work using python-keyring to support that.
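The client-side caching Adam mentions amounts to: store the token (plus its expiry) under a per-user key, and only hit keystone when the cached one is missing or about to expire. In the sketch below a dict stands in for the secure backend; with python-keyring you would persist the same JSON payload via `keyring.set_password` / `keyring.get_password` instead.

```python
import json
from datetime import datetime, timedelta

store = {}   # stand-in for the keyring backend

def save_token(user, token_id, expires):
    store[user] = json.dumps({"id": token_id,
                              "expires": expires.isoformat()})

def load_token(user, now, margin=timedelta(minutes=5)):
    raw = store.get(user)
    if not raw:
        return None
    data = json.loads(raw)
    if datetime.fromisoformat(data["expires"]) - now < margin:
        return None   # too close to expiry -- fetch a fresh one instead
    return data["id"]

now = datetime(2013, 6, 21, 12, 0)
save_token("alice", "tok123", now + timedelta(hours=1))
assert load_token("alice", now) == "tok123"
assert load_token("alice", now + timedelta(minutes=56)) is None
```

This is the piece Ravi's question about non-python JSON clients targets: a raw-API client gets none of this for free and has to implement the same reuse logic itself.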

-Arun


From: Adam Young <ayoung at redhat.com>
Reply-To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
Date: Friday, June 14, 2013 3:33 PM
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Keystone][Folsom] Token re-use

On 06/13/2013 07:58 PM, Ravi Chunduru wrote:
Hi,
  We have a Folsom setup and we find that our token table grows a lot. I understand the client can re-use the token, but why doesn't keystone reuse the token if the client asks with the same credentials?
I would like to know if there is any reason for not doing so.

Thanks in advance,
--
Ravi


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
You can cache the token on the client side and reuse it. Tokens have an expiry, so if you request a new token, you extend the expiry.







--
Ravi






--
Ravi









--
Ravi




