<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type"/>
</head>
<body>
<p style="margin: 0;">
<span>
<span></span>
Alternatively, have you considered creating a .tar.gz of every build snapshot and placing it on a server somewhere?
</span>
</p>
<p style="margin: 0;"> </p>
<p style="margin: 0;">
<span>e.g. a folder full of</span>
36def4af0ca85a1d0e66b5207056775bcb3b09ff.tar.gz files?
</p>
<p style="margin: 0;"> </p>
<p style="margin: 0;">Then one could write a simple Wine regression bisection tool that implements semantics similar to git bisect but essentially wraps wget. The server could then host an index file listing the commit SHAs in order.</p>
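<p style="margin: 0;">Such a wrapper could look roughly like this (a minimal sketch: the index format and snapshot naming are assumptions, and the probe step, which in a real tool would wget and run one ~122 MB snapshot, is simulated here):</p>

```python
# Sketch of a git-bisect-style search over a server-side index of commit SHAs.
# Hypothetical server layout: an index file listing SHAs in commit order, plus
# one snapshot per SHA named SHA.tar.gz.

def bisect(shas, is_bad):
    """Return the first SHA for which is_bad(sha) is True.

    Mirrors git bisect semantics: shas are in commit order, the first is
    known good and the last is known bad, so the first bad commit is found
    in O(log n) probes rather than testing all n builds.
    """
    lo, hi = 0, len(shas) - 1  # invariant: shas[lo] is good, shas[hi] is bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(shas[mid]):
            hi = mid
        else:
            lo = mid
    return shas[hi]

# A real probe would fetch SHA.tar.gz, unpack it, run the app under that
# build, and ask the user "good or bad?". Here we simulate an index of 36
# fake SHAs with a regression introduced at position 23:
shas = ["%040x" % i for i in range(36)]
first_bad = bisect(shas, lambda sha: shas.index(sha) >= 23)
print(first_bad == shas[23])
```

<p style="margin: 0;">Each probe then costs one snapshot download instead of a configure-and-make cycle, which is the whole point.</p>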
<p style="margin: 0;"> </p>
<p style="margin: 0;">This would save the user from having to clone a 26 GB repository when most of the commits are irrelevant to any given bisection.</p>
<p style="margin: 0;"> </p>
<p style="margin: 0;">Extra bonus points for compressing the small deltas between successive binaries* rather than compressing full Wine builds.</p>
<p style="margin: 0;"> </p>
<p style="margin: 0;">Joel</p>
<p style="margin: 0;"> </p>
<p style="margin: 0;">* Are binaries deterministic like this, or do they tend to be completely scrambled?</p>
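<p style="margin: 0;">As a toy illustration of why deltas help (assuming successive builds really do differ only in localized regions, which is exactly the determinism question above): XOR two mostly-identical buffers and the result is almost all zero bytes, which zlib compresses to almost nothing. Real delta tools such as xdelta, bsdiff, or git's own packfile deltas are far more sophisticated; this only sketches the principle.</p>

```python
import os
import zlib

def make_delta(old: bytes, new: bytes) -> bytes:
    # Pad both buffers to the same length and XOR them: identical regions
    # become long runs of zero bytes, which zlib compresses extremely well.
    n = max(len(old), len(new))
    xored = bytes(a ^ b for a, b in zip(old.ljust(n, b"\0"), new.ljust(n, b"\0")))
    return len(new).to_bytes(8, "big") + zlib.compress(xored, 9)

def apply_delta(old: bytes, delta: bytes) -> bytes:
    # Reverse the XOR against the old build to reconstruct the new one.
    new_len = int.from_bytes(delta[:8], "big")
    xored = zlib.decompress(delta[8:])
    padded = old.ljust(len(xored), b"\0")
    return bytes(a ^ b for a, b in zip(padded, xored))[:new_len]

# Two fake "builds" sharing 4 KB of incompressible content, differing in a
# few bytes, the way successive Wine builds differ when one DLL changes:
base = os.urandom(4096)
old_build = base + b"old"
new_build = base + b"new"

delta = make_delta(old_build, new_build)
full = zlib.compress(new_build, 9)
assert apply_delta(old_build, delta) == new_build
print(len(delta), len(full))  # the delta is a tiny fraction of the full build
```

<p style="margin: 0;">If builds are not deterministic, the XOR is noise instead of zeros and the delta saves nothing, so the footnote question really is the crux.</p>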
<p> </p>
<div style="margin: 5px 0px 5px 0px; font-family: monospace;">
<br/>
On 18 October 2011 at 09:45 Damjan Jovanovic &lt;damjan.jov@gmail.com&gt; wrote:
<br/>
<br/>
> Hi
<br/>
>
<br/>
> Since the beginning, I've had issues with regression testing. Despite the
<br/>
> fact it's very useful, it takes forever, it's easy to make a mistake
<br/>
> (especially during "reverse regression testing"), users find it too long and
<br/>
> technical, and only a small minority of regressions are ever bisected. And
<br/>
> several patches need backporting to allow older versions of Wine to compile
<br/>
> and run on today's make, gcc, and libraries - this is the case even for the
<br/>
> 1.0.x releases from less than 3 years ago!
<br/>
>
<br/>
> The problem is of course compilation. "configure" takes at least 40 seconds,
<br/>
> without any way to speed it up on multi-core CPUs. "make" takes > 5 minutes,
<br/>
> and it's only taking longer as Wine gets bigger. Compilation is
<br/>
> fundamentally complex and technical to users.
<br/>
>
<br/>
> But what if we had precompiled binaries, and regression testing consisted of
<br/>
> just running different versions of Wine?
<br/>
>
<br/>
> Wine binaries take up about 122 MB and take over 5 minutes to compile.
<br/>
> There's now 35770 commits between 36def4af0ca85a1d0e66b5207056775bcb3b09ff
<br/>
> (Release 1.0) and "origin". That's about 4.4 terabytes of storage and over
<br/>
> 4 months of compilation, if each of those versions had to be compiled and
<br/>
> installed into its own prefix, way beyond what most users are willing or
<br/>
> able to store or do. Most patches however end up affecting only a few binary
<br/>
> files in the end, and compiling successive versions allows "make" to be very
<br/>
> quick.
<br/>
>
<br/>
> So I've written a tool that compiles Wine and adds each commit's binaries
<br/>
> into a Git repository. It knows how to compile old versions of Wine
<br/>
> (currently as far back as 1.0). It knows that commits affecting only
<br/>
> ANNOUNCE, .gitignore, and files in dll/ or programs/ ending with .c and such
<br/>
> don't need to go through the endlessly slow "configure", only "make". It is
<br/>
> stateless: if interrupted, it can resume from the last successful commit. It
<br/>
> works around bugs in GNU make (you won't believe how many there are...).
<br/>
>
<br/>
> This tool compiled all 35000 or so commits from Wine 1.0 to around 4th
<br/>
> October 2011 in only 7 days, generating a Git repository of Wine binaries
<br/>
> that's only 26 gigabytes in size. Regression testing with binaries is a
<br/>
> pleasure: it takes only a few seconds :-) on each bisection. I bisected a 16
<br/>
> step regression in just 20 minutes, and most of that time was spent running
<br/>
> the application and dealing with 2 X-server crashes.
<br/>
>
<br/>
> I haven't figured out how to make the binaries available to users. Few users
<br/>
> can clone a 26 gigabyte repository, and even fewer places can serve that
<br/>
> much to multiple users. Maybe Git can compress it further? The other idea I
<br/>
> had is that users should be able to regression test through a GUI tool.
<br/>
> Maybe the GUI tool can just download and run the +/- 122 MB binary snapshots
<br/>
> for specific commits, instead of having the entire binary repository
<br/>
> locally?
<br/>
>
<br/>
> Any other ideas? Would you like to see this tool? Can I send an attachment
<br/>
> with it?
<br/>
>
<br/>
> Thank you
<br/>
> Damjan Jovanovic
</div>
</body>
</html>