Tom Fifield <tom@openstack.org> wrote on 25/02/2015 06:46:13 AM:

> On 24/02/15 19:27, Daniel P. Berrange wrote:
> > On Tue, Feb 24, 2015 at 12:05:17PM +0100, Thierry Carrez wrote:
> >> Daniel P. Berrange wrote:
> >>> [...]
>
> > I'm not familiar with how the translations work, but if they are
> > waiting until the freeze before starting translation work I'd
> > say that is a mistaken approach. Obviously during the active dev part
> > of the cycle, some translated strings are in flux, so if translation
> > was taking place in parallel there could be some wasted effort, but
> > I'd expect that to be the minority case. I think the majority of
> > translation work can be done in parallel with dev work and the freeze
> > time just needs to tie up the small remaining bits.
>
>
> So, two points:
>
> 1) We wouldn't be talking about throwing just a couple of percent of
> their work away.
>
> As an example, even without looking at the introduction of new strings
> or the deletion of others, you may not be aware that changing a single
> word in a string in the code means that the entire string needs to be
> re-translated. Even with the extensive translation memory systems we
> have making suggestions as best they can, we're talking about very,
> very significant amounts of "wasted effort".

How difficult would it be to quantify this "wasted effort"? For example,
someone could write a script that extracts the data for a histogram
showing the number of strings (e.g., in Nova) that were changed or
overridden by subsequent patches up to 1 week apart, between 1 and 2
weeks apart, and so on up to, say, 52 weeks.
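
As a rough, untested sketch (Python; it assumes a nova checkout with the
extracted catalog committed at nova/locale/nova.pot -- adjust the path as
needed -- and it only matches single-line msgids, so multi-line strings
are missed):

#!/usr/bin/env python
# Histogram of how long extracted strings survive before being changed
# or removed. A reworded string shows up as one removal plus one
# addition, which is exactly the re-translation event discussed above.
import re
import subprocess
from collections import Counter
from datetime import datetime

POT = "nova/locale/nova.pot"  # assumed path to the extracted catalog

def commits_touching(path):
    """Yield oldest-first (sha, date) for commits that changed the file."""
    out = subprocess.check_output(
        ["git", "log", "--reverse", "--format=%H %ct", "--", path])
    for line in out.decode().splitlines():
        sha, ts = line.split()
        yield sha, datetime.fromtimestamp(int(ts))

def msgids(sha, path):
    """Set of single-line msgids in the catalog as of the given commit."""
    try:
        pot = subprocess.check_output(
            ["git", "show", "%s:%s" % (sha, path)],
            stderr=subprocess.DEVNULL)
    except subprocess.CalledProcessError:
        return set()  # file absent at this commit
    return set(re.findall(r'^msgid "(.+)"$',
                          pot.decode("utf-8", "replace"), re.MULTILINE))

born = {}          # msgid -> date the string first appeared
weeks = Counter()  # survival time in weeks -> number of strings

prev = set()
for sha, date in commits_touching(POT):
    cur = msgids(sha, POT)
    for msg in cur - prev:   # newly added (or reworded) strings
        born[msg] = date
    for msg in prev - cur:   # strings changed or deleted
        weeks[(date - born.pop(msg)).days // 7] += 1
    prev = cur

for week in sorted(weeks):
    print("%3d week(s): %d strings" % (week, weeks[week]))

Run from the top of a nova tree; the output is one line per weekly bucket,
ready to plot. If the .pot isn't committed, the same idea works by
regenerating the catalog at each commit, just much more slowly.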

Regards,
Alex

> Regards,
>
>
> Tom