[infra][upstream-institute] Bot to vote on changes in the sandbox
Hi All,

During the Berlin forum the idea of running some kind of bot on the sandbox [1] repo came up as another way to onboard/encourage contributors. The general idea is that the bot would:

1. Leave a -1 review on 'qualifying' [2] changes along with a request for some small change
2. Upon seeing a new patchset to the change, vote +2 (and possibly +W?) on the change

This would show new contributors approximately what code review looks like [3], and also reduce the human requirements. The OpenStack Upstream Institute would make use of the bot, and we'd also use it as an interactive tutorial from the contributors portal.

I think this can be done as a 'normal' CI job with the following considerations:

* Because we want this service to be reasonably robust, we don't want the code or the job definitions to live in-repo, so I guess they'd need to live in project-config [4]. The bot itself doesn't need to be stateful, as Gerrit comments/metadata would act as the store/state sync.
* We'd need a Gerrit account we can use to lodge these votes, as using 'proposal-bot' or tonyb would be a bad idea.

My initial plan would be to develop the bot locally and then migrate it into the opendev infra once we've proven its utility.

So, thoughts on the design or considerations, or should I just code something up and see what it looks like?

Yours Tony.

[1] http://git.openstack.org/cgit/openstack-dev/sandbox
[2] The details of what counts as qualifying can be fleshed out later, but there needs to be something so that contributors using the sandbox that don't want to be bothered by the bot won't be.
[3] So it would a) be faster than typical and b) not all new changes are greeted with a -1 ;P
[4] Another repo would be better: because project-config is trusted we can't use Depends-On to test changes to the bot itself, but we need to consider the bot's access to secrets.
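To make the two-step flow above concrete, here is a minimal sketch of the decision logic such a job might run. It assumes, as described, that all state comes from Gerrit metadata (the patchset number); the 'Bot-Reviewer' trailer and the review messages are placeholders for whatever qualifying marker and wording are eventually chosen.

```python
# Sketch only: the trailer name and messages are hypothetical.

def qualifies(commit_message: str) -> bool:
    """A change qualifies only if it explicitly opts in via a trailer,
    so sandbox users who don't want the bot are left alone."""
    return any(
        line.startswith("Bot-Reviewer:")
        for line in commit_message.splitlines()
    )

def decide_vote(commit_message: str, patchset_number: int):
    """Return (label_value, comment), or None if the bot stays quiet."""
    if not qualifies(commit_message):
        return None
    if patchset_number == 1:
        # First patchset: downvote and request a small change.
        return (-1, "Welcome! Please fix the trailing whitespace "
                    "and push a new patchset.")
    # Any follow-up patchset: approve.
    return (2, "Thanks for the update; this looks good now!")
```

The actual vote would then be lodged through the Gerrit account discussed above (e.g. via Gerrit's REST review endpoint), which is deliberately out of scope for this sketch.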
On Fri, 2019-02-01 at 15:33 +1100, Tony Breeds wrote:
> I think this can be done as a 'normal' CI job with the following considerations:
[...]
> * We'd need a Gerrit account we can use to lodge these votes, as using 'proposal-bot' or tonyb would be a bad idea.

Do you need an actual bot? Why not just have a job defined in the sandbox repo itself that runs, say, pep8 or some simple test like checking the commit message for Closes-Bug: or something like that.

I noticed that if you are modifying Zuul jobs and have a syntax error, we actually comment on the patch to say where it is, like this: https://review.openstack.org/#/c/632484/2/.zuul.yaml@31

So you could just develop a custom job that ran in a separate pipeline, and set the success action to Code-Review: +2 and the failure action to Code-Review: -1. The author could then add the second +2 and +W to complete the normal workflow. As far as I know the sandbox repo allows all users to +2/+W, correct?
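Sean's success/failure idea maps onto Zuul's pipeline-level reporter configuration rather than anything per-job. A rough sketch of what such a custom pipeline could look like follows; the pipeline name, description, and trigger are illustrative assumptions, not an existing definition:

```yaml
# Hypothetical pipeline: report Code-Review votes instead of Verified.
- pipeline:
    name: sandbox-review
    description: Auto-reviews changes to the sandbox repository.
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        Code-Review: 2
    failure:
      gerrit:
        Code-Review: -1
```

As Jeremy notes later in the thread, these votes would show up under the Zuul account unless separate credentials were used.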
On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote:
> Do you need an actual bot? Why not just have a job defined in the sandbox repo itself that runs, say, pep8 or some simple test like checking the commit message for Closes-Bug: or something like that.

I think that's basically what he was suggesting: a Zuul job which votes on (some) changes to the openstack/sandbox repository.

Some challenges there... first, you'd probably want credentials set as Zuul secrets, but in-repository secrets can only be used by jobs in safe "post-review" pipelines (gate, promote, post, release...) to prevent leakage through speculative execution of changes to those job definitions. The workaround would be to place the secrets and any playbooks which use them into a trusted config repository such as openstack-infra/project-config, so they can be safely used in "pre-review" pipelines like check.

> I noticed that if you are modifying Zuul jobs and have a syntax error, we actually comment on the patch to say where it is, like this: https://review.openstack.org/#/c/632484/2/.zuul.yaml@31
>
> So you could just develop a custom job that ran in a separate pipeline, and set the success action to Code-Review: +2 and the failure action to Code-Review: -1.
[...]

It would be a little weird to have those code review votes showing up for the Zuul account, and might further confuse students. Also, what you describe would require a custom pipeline definition, as those behaviors apply to pipelines, not to jobs.

I think Tony's suggestion of doing this as a job with custom credentials to log into Gerrit and leave code review votes is probably the most workable and least confusing solution, but I also think the bulk of that job definition will end up having to live outside the sandbox repo for the logistical reasons described above.

--
Jeremy Stanley
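Jeremy's point about secrets can be made concrete. In Zuul, a secret defined in a trusted config repository can be attached to a job there and used even when the job is triggered from a pre-review pipeline. A rough sketch, with hypothetical names throughout (the account name, job name, and playbook path are all assumptions):

```yaml
# In a trusted config repository (e.g. project-config); names are illustrative.
- secret:
    name: sandbox-bot-gerrit
    data:
      username: sandbox-bot
      # The value would be encrypted with the project's Zuul public key:
      password: !encrypted/pkcs1-oaep |
        <ciphertext>

- job:
    name: sandbox-review-bot
    description: Leaves teaching code-review votes on sandbox changes.
    run: playbooks/sandbox-bot/run.yaml
    secrets:
      - sandbox-bot-gerrit
```

Because the playbook and secret live in a trusted repo, speculative changes to them are not executed, which is exactly the leakage protection Jeremy describes (and why Depends-On can't be used to test the bot itself).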
On Fri, 2019-02-01 at 12:34 +0000, Jeremy Stanley wrote:
> It would be a little weird to have those code review votes showing up for the Zuul account, and might further confuse students. Also, what you describe would require a custom pipeline definition, as those behaviors apply to pipelines, not to jobs.

Yes, I was suggesting a custom pipeline.

> I think Tony's suggestion of doing this as a job with custom credentials to log into Gerrit and leave code review votes is probably the most workable and least confusing solution, but I also think the bulk of that job definition will end up having to live outside the sandbox repo for the logistical reasons described above.

No disagreement that that might be a better path. When I hear "bot" I think of some long-lived thing, like an IRC bot, that would presumably have to listen to the event queue. So I was just wondering if we could avoid having to write an actual "bot" application and just have Zuul jobs do it instead.
On 2019-02-01 12:54:32 +0000 (+0000), Sean Mooney wrote:
[...]
> When I hear "bot" I think of some long-lived thing, like an IRC bot, that would presumably have to listen to the event queue. So I was just wondering if we could avoid having to write an actual "bot" application and just have Zuul jobs do it instead.

Yes, we have a number of stateless/momentary processes like Zuul jobs and Gerrit hook scripts which get confusingly referred to as "bots," so I've learned to stop making such assumptions where that term is bandied about.

--
Jeremy Stanley
On Fri, Feb 01, 2019 at 12:34:20PM +0000, Jeremy Stanley wrote:
> Some challenges there... first, you'd probably want credentials set as Zuul secrets, but in-repository secrets can only be used by jobs in safe "post-review" pipelines (gate, promote, post, release...) to prevent leakage through speculative execution of changes to those job definitions. The workaround would be to place the secrets and any playbooks which use them into a trusted config repository such as openstack-infra/project-config, so they can be safely used in "pre-review" pipelines like check.

Yup, that was my plan. It also means that new contributors can't accidentally break the bot :)

> It would be a little weird to have those code review votes showing up for the Zuul account, and might further confuse students. Also, what you describe would require a custom pipeline definition, as those behaviors apply to pipelines, not to jobs.
>
> I think Tony's suggestion of doing this as a job with custom credentials to log into Gerrit and leave code review votes is probably the most workable and least confusing solution, but I also think the bulk of that job definition will end up having to live outside the sandbox repo for the logistical reasons described above.

Cool. There clearly isn't a rush on this, but it would be really good to have it in place before the Denver summit. Can someone that knows how either create the Gerrit user and Zuul secrets, or point me at how to do it?

Yours Tony.
On Fri, Feb 01, 2019 at 11:25:47AM +0000, Sean Mooney wrote:
> Do you need an actual bot? Why not just have a job defined in the sandbox repo itself that runs, say, pep8 or some simple test like checking the commit message for Closes-Bug: or something like that.

Yup, sorry for using the overloaded term 'bot'; what you describe is what I was trying to suggest.

> I noticed that if you are modifying Zuul jobs and have a syntax error, we actually comment on the patch to say where it is, like this: https://review.openstack.org/#/c/632484/2/.zuul.yaml@31

Yup.

> So you could just develop a custom job that ran in a separate pipeline, and set the success action to Code-Review: +2 and the failure action to Code-Review: -1. The author could then add the second +2 and +W to complete the normal workflow. As far as I know the sandbox repo allows all users to +2/+W, correct?

Correct.

Yours Tony.
Tony-

Thanks for following up on this!

> The general idea is that the bot would: 1. Leave a -1 review on 'qualifying' [2] changes along with a request for some small change

As I mentioned in the room, to give a realistic experience the bot should wait two or three weeks before tendering its -1.

I kid (in case that wasn't clear).

> 2. Upon seeing a new patchset to the change, vote +2 (and possibly +W?) on the change

If you're compiling a list of eventual features for the bot, another one that could be neat is: after the second patch set, the bot merges a change that creates a merge conflict on the student's patch, which they then have to go resolve.

Also, cross-referencing [1], it might be nice to update that tutorial at some point to use the sandbox repo instead of nova. That could be done once we have bot action, so said action could be incorporated into the tutorial flow.

> [2] The details of what counts as qualifying can be fleshed out later, but there needs to be something so that contributors using the sandbox that don't want to be bothered by the bot won't be.

Yeah, I had been assuming it would be some tag in the commit message. If we ultimately enact different flows of varying complexity, the tag syntax could be enriched so students in different courses/grades could get different experiences. For example:

Bot-Reviewer: <name-of-osi-course>

or

Bot-Reviewer: Level 2

or

Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3

The possibilities are endless :P

-efried

[1] https://review.openstack.org/#/c/634333/
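The three trailer forms Eric proposes could be handled by one small parser. A hedged sketch (the distinction rules between a course name, a "Level N" value, and a feature list are my own assumptions about how the syntax might be disambiguated):

```python
# Sketch of parsing the proposed Bot-Reviewer commit-message trailer.

def parse_bot_reviewer(commit_message: str):
    """Return a dict describing the requested bot behaviour, or None
    if the change does not opt in to the bot at all."""
    value = None
    for line in commit_message.splitlines():
        if line.startswith("Bot-Reviewer:"):
            value = line.split(":", 1)[1].strip()
    if value is None:
        return None
    # "Bot-Reviewer: Level 2" style.
    if value.lower().startswith("level "):
        return {"level": int(value.split()[1])}
    # "Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3" style.
    if "," in value or "=" in value:
        features = {}
        for item in value.split(","):
            item = item.strip()
            if "=" in item:
                key, _, val = item.partition("=")
                features[key.strip()] = val.strip()
            else:
                features[item] = True
        return {"features": features}
    # Otherwise treat the value as a course name.
    return {"course": value}
```

Parsing into a dict like this would let different courses/grades dispatch to different review flows without changing the job itself.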
On Fri, Feb 01, 2019 at 08:25:03AM -0600, Eric Fried wrote:
> Yeah, I had been assuming it would be some tag in the commit message. If we ultimately enact different flows of varying complexity, the tag syntax could be enriched so students in different courses/grades could get different experiences. For example:
>
> Bot-Reviewer: <name-of-osi-course>
>
> or
>
> Bot-Reviewer: Level 2
>
> or
>
> Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3

Something like that would work well. A nice thing about it is that it begins the process of teaching about other tags we put in commit messages.

> The possibilities are endless :P

:) Of course it should be Auto-Bot [1] instead of Bot-Reviewer ;P

Yours Tony.

[1] The bike shed is pink!
On Fri, Feb 1, 2019 at 6:26 AM Eric Fried <openstack@fried.cc> wrote:
> If you're compiling a list of eventual features for the bot, another one that could be neat is: after the second patch set, the bot merges a change that creates a merge conflict on the student's patch, which they then have to go resolve.

Another eventual feature, which I talked about with Jimmy MacArthur a few weeks ago, would be to have the bot ask new contributors how they got to this point in their contributions. Was it self-driven? Was it part of OUI? Was it from other documentation? It would be interesting to see how our new contributors are making their way in, so that we can better help them/fix where the system is falling down. It would also be really interesting data :) And who doesn't love data?

[...]
On Thu, Feb 07, 2019 at 12:02:48PM -0800, Kendall Nelson wrote:
> Another eventual feature, which I talked about with Jimmy MacArthur a few weeks ago, would be to have the bot ask new contributors how they got to this point in their contributions. Was it self-driven? Was it part of OUI? Was it from other documentation? It would be interesting to see how our new contributors are making their way in, so that we can better help them/fix where the system is falling down. It would also be really interesting data :) And who doesn't love data?

We could do that. Do you think it should block the 'approval' of the sandbox change, or would it be a purely optional question/response?

Yours Tony.
@Tony: Thank you for working on this!

[...]

> Yeah, I had been assuming it would be some tag in the commit message. If we ultimately enact different flows of varying complexity, the tag syntax could be enriched so students in different courses/grades could get different experiences.
[...]
> The possibilities are endless :P

By having tags we can turn off the bot for the in-person trainings, while also helping people practice different things outside of trainings, so I really like the approach!

Once we have a prototype working, we can also think about putting some more pointers in the training slides to the Contributor Guide sections describing how to manage open reviews/changes, to make sure people find it.

Thanks,
Ildikó
participants (6)
- Eric Fried
- Ildiko Vancsa
- Jeremy Stanley
- Kendall Nelson
- Sean Mooney
- Tony Breeds