<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 24, 2015 at 2:51 AM, Walter A. Boring IV <span dir="ltr"><<a href="mailto:walter.boring@hp.com" target="_blank">walter.boring@hp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">On 03/23/2015 01:50 PM, Mike Perez wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On 12:59 Mon 23 Mar , Stefano Maffulli wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On Mon, 2015-03-23 at 11:43 -0700, Mike Perez wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
We've been talking about CIs for a year. We started talking about CI deadlines<br>
in August. If you posted a driver for Kilo, it was communicated that you're<br>
required to have a CI by the end of Kilo [1][2][3][4][5][6][7][8]. This<br>
should have been known by your engineers regardless of when you submitted your<br>
driver.<br>
</blockquote>
Let's work to fix the CI bits for Liberty and beyond. I have the feeling<br>
that despite your best efforts to communicate the deadlines, some quite<br>
visible failures have happened.<br>
<br>
You've been clear about Cinder's deadlines; I've been trying to add them<br>
to the weekly newsletter, too.<br>
<br>
To the people whose drivers don't have their CI completed in time: what<br>
do you suggest should change so that you won't miss the deadlines in the<br>
future? How should the processes and tools be different so you'll be<br>
successful with your OpenStack-based products?<br>
</blockquote>
Just to be clear, here's all the communication attempts made to vendors:<br>
<br>
1) Talks during the design summit and the meetup on Friday at the summit.<br>
<br>
2) Discussions at the Cinder midcycle meetups in Fort Collins and Austin.<br>
<br>
3) Individual emails to driver maintainers. This includes anyone else who has<br>
worked on the driver file according to the git logs.<br>
<br>
4) Reminders on the mailing list.<br>
<br>
5) Reminders on IRC and in the weekly Cinder IRC meetings.<br>
<br>
6) If you submitted a new driver in Kilo, you had the annoying reminder from<br>
reviewers that your driver needs to have a CI by the end of Kilo.<br>
<br>
And lastly, I have made phone calls to companies that showed zero response<br>
to my emails and gave me no updates. This is very difficult with larger<br>
companies, because you're redirected from one person to another trying to find out who their<br>
"OpenStack person" is. I've left reminders on the voice mail extensions I was given.<br>
<br>
I've talked to folks at the OpenStack Foundation to get feedback on my<br>
communication, and was told it was good, and even better than previous<br>
communication around controversial changes.<br>
<br>
Nevertheless, I expected people to be angry with me and blame me regardless of<br>
my attempts to help people be successful and to move the community forward.<br>
<br>
</blockquote></div></div>
I completely agree here, Mike. The Cinder cores, PTL, and the rest of the<br>
community have been talking about making CI a requirement for quite some time now.<br>
It's really not the fault of the Cinder PTL or core members that your driver got pulled from the Kilo<br>
release because you had issues getting your CI up and stable in the required time frame.<br>
Mike made every possible attempt to let folks know, up front, that the deadline was coming.<br>
<br>
Getting CI in place is critical for the stability of Cinder in general. We have already benefited from<br>
having third-party CI in place. Just a few weeks ago, a submitted change actually<br>
broke the HP drivers. The CI we had in place discovered it and brought it to the surface. Without<br>
that CI in place for our drivers, we would be in a bad spot now.</blockquote><div><br></div><div>+1, we (GlusterFS) also discovered issues with live snapshot tests failing as part of CI (the GlusterFS driver being one of the very few in Cinder that uses live snapshots), and we fixed them [1].<br><br>[1]: <a href="https://review.openstack.org/#/c/156940/">https://review.openstack.org/#/c/156940/</a><br><br></div><div>thanx,<br></div><div>deepak<br></div></div><br></div></div>