Hi *,

one of our customers has two almost identical clouds (Victoria); the only difference is that one of them has three control nodes (HA via Pacemaker) and the other has only a single control node. They use Terraform to deploy lots of different k8s clusters and other things.

In the HA cloud they noticed Keystone errors when they purged a project (cleanly) and started the redeployment immediately afterwards. We ran some tests to find out which Keystone cache is responsible, and it seems to be the role cache (default 600 seconds): Terraform fails, reporting that the project was not found and referring to the previous ID of the project. The same deployment works in the single-control environment without these errors, although caching is enabled there as well.

I already tried reducing the cache_time to 30 seconds, but that doesn't help (even though it takes more than 30 seconds until Terraform is ready after the prechecks). Disabling the role cache entirely would get rid of the stale entries, but the downside is significantly longer response times when using the dashboard or querying the APIs.

Is there any way to tune the role cache so that we get reasonable performance and can still redeploy projects without a "sleep 600"?
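For reference, this is roughly what we experimented with in keystone.conf (a sketch of our setup; the memcached backend and the controller hostnames are placeholders for our HA deployment, not the literal values):

  [cache]
  # global caching switch; shared memcached across the control nodes
  enabled = true
  backend = dogpile.cache.memcached
  memcache_servers = controller1:11211,controller2:11211,controller3:11211

  [role]
  # per-subsystem toggle and TTL for the role cache
  # (the effective default TTL is the global 600 seconds; lowering it
  # to 30 did not help, and caching = false hurts API response times)
  caching = true
  cache_time = 30

Any comments or recommendations are appreciated!

Regards,
Eugen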