[qa][ptg][patrole] RBAC testing improvement ideas for Patrole

Sergey Vilgelm sergey at vilgelm.info
Mon May 6 01:36:00 UTC 2019

Gmann, thank you so much.

1. I’m not sure I understood #1. Do you mean that oslo.policy will raise special exceptions for successful and unsuccessful verification if the flag is set? So a service would see the exception and just return it, and Patrole could recognize those exceptions?
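As I understand the proposal, it could look something like this minimal sketch. All of the names here (`Enforcer`, `check_only`, `PolicyCheckPassed`, `PolicyCheckFailed`) are my assumptions for illustration, not real oslo.policy API:

```python
class PolicyCheckPassed(Exception):
    """Hypothetical: raised instead of continuing when the check succeeds."""


class PolicyCheckFailed(Exception):
    """Hypothetical: raised when the check fails (analogous to a 403)."""


class Enforcer:
    """Toy stand-in for an oslo.policy-style enforcer with a
    'policy check only' flag, per the idea discussed at the PTG."""

    def __init__(self, rules, check_only=False):
        self.rules = rules          # rule name -> set of allowed roles
        self.check_only = check_only

    def enforce(self, rule, creds):
        allowed = creds.get("role") in self.rules.get(rule, set())
        if self.check_only:
            # Signal the outcome immediately so the service can return a
            # distinct response code without performing the full API
            # operation; Patrole would recognize that code as a pass.
            if allowed:
                raise PolicyCheckPassed(rule)
            raise PolicyCheckFailed(rule)
        return allowed
```

With `check_only` disabled the enforcer behaves normally; with it enabled, both outcomes short-circuit the API operation, which is where the test-time saving would come from.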

I totally agree with using one job per service. It would give us the possibility to temporarily disable some services while still allowing patches for the other services to be tested and merged.

2. +1 for option 2. We can decrease the number of jobs and have just one job per service, but we need to think about how to separate the logs. IMO we need to extend the `action` decorator to run a test 9 times (depending on the configuration), memorize the results for all combinations, and use something like `if not all(results): raise PatroleException()`
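A rough sketch of what I mean by extending the decorator. The decorator name `action` matches Patrole's existing one, but everything else (the `expected` mapping, the keyword arguments, `PatroleException`) is a simplified assumption, not the real Patrole implementation:

```python
import functools
import itertools

SCOPES = ("system", "project", "domain")
ROLES = ("admin", "member", "reader")


class PatroleException(Exception):
    """Raised when any scope/role combination gave an unexpected result."""


def action(expected):
    """Hypothetical extended `action` decorator: run the wrapped test
    once per scope/role combination (9 by default), memorize every
    result, and fail only at the end if any combination misbehaved.

    `expected` maps a (scope, role) pair to whether access should be
    granted for that combination.
    """
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            results = []
            for scope, role in itertools.product(SCOPES, ROLES):
                outcome = test_func(*args, scope=scope, role=role, **kwargs)
                # A combination passes only if the observed outcome
                # matches the expectation for that scope/role pair.
                results.append(outcome == expected[(scope, role)])
            if not all(results):
                raise PatroleException(
                    "unexpected result for some scope/role combination")
            return results
        return wrapper
    return decorator
```

Collecting all 9 results before raising (instead of failing fast) is deliberate: the failure message, or the logs, can then report every misbehaving combination at once, which also relates to the log-separation question above.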

Sergey Vilgelm

On May 5, 2019, 2:15 AM -0500, Ghanshyam Mann <gmann at ghanshyammann.com>, wrote:
> Patrole is emerging as a good tool for RBAC testing. AT&T is already running it on their production cloud, and
> we have received a good amount of interest/feedback from other operators.
> We had a few discussions regarding Patrole testing improvements during the PTG among the QA, Nova, and Keystone teams.
> I am writing the summary of those discussions below and would like to get opinions from Felipe & Sergey as well.
> 1. How to improve the Patrole testing time:
> Currently, Patrole tests perform the complete API operation, which takes time and makes Patrole testing
> very long. Patrole is responsible for testing the policies only, so it does not need to wait for the API operation
> to complete.
> John had a good idea to handle that via a flag. If that flag is enabled (per service, and disabled by default), then
> oslo.policy can return a different error code on success (other than 403). The API can return the response
> with that error code, which Patrole can treat as a pass.
> Morgan raised a good point about making it per API call rather than global. We can do that as a next step; shall we
> start with the global flag per service for now?
> - https://etherpad.openstack.org/p/ptg-train-xproj-nova-keystone
> Another thing we should improve in the current Patrole jobs is to separate the jobs per service. Currently, all 5 services
> are installed and run in a single job. Running them all on the Patrole gate is fine, but a project-side gate does not need to run
> any other service's tests. For example, a patrole-keystone job could install only Keystone and run only the
> Keystone tests. That way projects can reuse the Patrole jobs directly and do not need to prepare a separate job.
> 2. How to run patrole tests with all negative, positive combination for all scope + defaults roles combinations:
> - The current patrole-admin/member/reader jobs are able to test the negative pattern. For example, the
> patrole-member job tests the admin APIs in a negative way and makes sure a test passes only if the member
> role gets a 403.
> - As we also have scope_type support, we need to extend the jobs to run all 9 combinations of the 3 scopes
> (system, project, domain) and the 3 roles (admin, member, reader).
> - option1: run 9 different jobs, one per combination, as we currently do
> for the admin, member, and reader roles. The issue with this approach is that the gate will take a lot of time to
> run these 9 jobs separately.
> - option2: run all 9 combinations in a single job by running the tests in a loop over the different
> scope/role combinations. This might require converting the current [role] config option to a list type,
> per service, so that the user can configure which default roles are available for the corresponding service.
> This option can save a lot of time by avoiding the devstack installation overhead of the 9-separate-jobs option.
> -gmann
