We *really* need a development model change!

Andriy Palamarchuk apa3a at yahoo.com
Wed Dec 26 13:27:41 CST 2001


--- Andreas Mohr <andi at rhlx01.fht-esslingen.de> wrote:
> On Wed, Dec 26, 2001 at 10:07:20AM -0800, Andriy
> Palamarchuk wrote:
> > Andreas Mohr wrote:

[... skipped ...]

> > - it would be better if the suite printed summary
> > information and information about failed tests
> > only
> Yep. Current output is something like:
> WINETEST:test:Loader_16:LoadModule:FAILED:01:[retval]
> WINETEST:test:Loader_16:LoadModule:FAILED:12:[retval]
> WINETEST:test:Loader_16:LoadModule:FAILED:13:[retval]
> 
> or, in case of success, only:
> WINETEST:test:Loader_16:LoadModule:OK
I mean something like: 
===================
Run: 1234 tests
Failed: 2 Errors: 1

Fail 1: <....>
Fail 2: <....>
Error 1: <....>
===================
In the example above a failure means a condition
check failed; an error means an exception was raised.
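
For illustration, a rough C sketch of such a
summary-only harness (all names here are invented;
this is not the existing winetest code). Failures
are buffered so the summary can be printed first;
exception "errors" could be counted the same way:

#include <stdio.h>
#include <string.h>

#define MAX_FAILURES 256

static int tests_run, tests_failed;
static char failures[MAX_FAILURES][128];

/* Record one check; remember the message only on failure. */
static void check(int condition, const char *msg)
{
    tests_run++;
    if (condition) return;
    if (tests_failed < MAX_FAILURES)
        strncpy(failures[tests_failed], msg, sizeof failures[0] - 1);
    tests_failed++;
}

/* Print the compact report proposed above. */
static void print_summary(void)
{
    int i;
    printf("===================\n");
    printf("Run: %d tests\nFailed: %d\n\n", tests_run, tests_failed);
    for (i = 0; i < tests_failed; i++)
        printf("Fail %d: %s\n", i + 1, failures[i]);
    printf("===================\n");
}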

I suggest printing nothing for successful tests; at
least that is what I am accustomed to from JUnit.
We are not interested in successful tests, are we?
;-)

> This output is pretty useful, I think:
> It can be parsed *very* easily, and grepping for
> regressions is also pretty easy.
> 
> "WINETEST" exists to be able to distinguish this
> output from bogus Wine messages,
> "test" indicates that this is a test line output
> versus a warning message or similar output,
> "Loader_16" indicates testing of 16bit loader
> functionality,
> "LoadModule" - well... ;-)
> "FAILED" - obvious
> "01" - test number 01 failed.
> "[retval]" - contains the (wrong) return value of
> the function, if applicable.

Looks simple, and the output is really useful. I
just don't see any reason to show information about
successful tests. At the very least we can get a
short form of the output by clipping all "OK"
messages from your suggested format.
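
For example, lines in your format could come from
reporting macros along these lines (the macro names
and arguments are invented here for illustration,
not Wine's actual code):

#include <stdio.h>

/* Field order follows the format quoted above:
 * WINETEST:test:<module>:<function>:<status>:<testno>:[retval] */
#define TEST_FAILED(module, func, testno, retval) \
    printf("WINETEST:test:%s:%s:FAILED:%02d:[%ld]\n", \
           (module), (func), (testno), (long)(retval))

#define TEST_OK(module, func) \
    printf("WINETEST:test:%s:%s:OK\n", (module), (func))

A call like TEST_FAILED("Loader_16", "LoadModule", 1, ret)
then produces exactly the kind of line shown above,
and clipping the "OK" lines gives the short form.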

> BTW, I think having a test suite wouldn't be about
> hunting regressions at first: just look at my
> LoadModule16 example and you'll see that we're
> still quite far from hunting regressions *only*.
> My guess is that we'll be shocked at how many
> functions fail in how many ways.

Agree, agree, agree... We could even use eXtreme
Programming approaches :-) See
http://xprogramming.com/ and other sites on the
subject. I also like this article:
http://members.pingnet.ch/gamma/junit.htm
I use JUnit extensively and like the whole idea.

> > - make the test suite more "visible" for existing
> > developers. Ask them to run the test suite before
> > submitting a patch?
> No, I don't think so.
> I think it suffices if Alexandre runs the test
> suite before or after every large commit cycle.
> That way he'd be able to back out problematic
> patches.
> Asking developers to run the *whole* test suite
> for each patch could be pretty painful.

I don't see why running the unit tests would be
painful. I'd estimate that it would take no more
than 5 minutes to test all 12000 Win32 functions.
We can also keep tests for slow or rarely changed
areas of the API in a separate "complete" suite.

I think the test suite is for developers, not for
Alexandre (I mean as a team leader :-) or QA. This
is why I want to increase the "visibility" of the
unit tests. Again, developers will be more likely
to contribute to the suite if they remember it
exists.

I do not suggest enforcing unit test usage, because
we will always have developers/companies who don't
want to do that. It would suffice to recommend
checking, before submitting a patch, that we have
the same (or, accidentally, a smaller :-) number of
failures as we had before, or reporting any new
bugs introduced. It is even OK for the number of
issues to grow, as long as the developer consciously
decides to break something. The compact test output
I describe above will also help to quickly spot any
changes in the unit test output.

> We'd also need to pass a winver value to the test
> suite via command line in order to let the test
> app adapt to different windows environments (and
> thus also to different wine --winver settings!).

Sounds good.
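
As a rough sketch of what that could look like (the
option name and the set of version constants are my
invention, not an agreed interface):

#include <string.h>

typedef enum { WV_WIN95, WV_WIN98, WV_NT40, WV_WIN2K } winver_t;

/* Scan the command line for a hypothetical --winver option
 * so that tests can branch on the expected Windows behaviour. */
static winver_t parse_winver(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc - 1; i++)
        if (!strcmp(argv[i], "--winver"))
        {
            if (!strcmp(argv[i + 1], "win95")) return WV_WIN95;
            if (!strcmp(argv[i + 1], "win98")) return WV_WIN98;
            if (!strcmp(argv[i + 1], "nt40"))  return WV_NT40;
        }
    return WV_WIN2K; /* arbitrary default */
}

A test could then branch, e.g. using the check()
helper sketched earlier:
if (winver == WV_WIN95) check(ret == 0, "..."); else check(ret == 2, "...");
where the expected value differs between Windows
versions.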

> > - it would be great to have functionality to
> > support output comparison? For some functionality
> > it is easier to write tests to compare output
> > instead of doing explicit checks (e.g. tests
> > involving a few processes). The output can be
> > redirected to a file and the files compared. If
> > we use files we need to store files for Wine and
> > a few versions of Windows :-(
> Hmm, I don't quite get what exactly you're talking
> about.

Example: I have a pretty big unit test for the
SystemParametersInfo function. Part of the test is
to ensure that the WM_SETTINGCHANGE window message
is fired when necessary. I have a simple handler
for the message which prints a confirmation when
the message is received. I save the output when I
run the tests under Windows and under Wine and
compare the two. Advantages: 1) simplicity, 2) I
can see the contents of the failure. To do an
explicit check I would need to set up some
communication (a common variable, a step counter,
etc.) between the message handler and the testing
code, and if those two code snippets live in
different processes I would even need IPC just to
do an explicit check.
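
A minimal sketch of such a handler (window creation
boilerplate omitted; this assumes an ANSI build,
where the WM_SETTINGCHANGE lParam points to a
section name string):

#include <windows.h>
#include <stdio.h>

/* Log every WM_SETTINGCHANGE so that runs under Windows
 * and Wine can simply be diffed against each other. */
static LRESULT CALLBACK test_wnd_proc(HWND hwnd, UINT msg,
                                      WPARAM wparam, LPARAM lparam)
{
    if (msg == WM_SETTINGCHANGE)
        printf("WM_SETTINGCHANGE: wParam=%u lParam=%s\n",
               (unsigned)wparam,
               lparam ? (const char *)lparam : "(null)");
    return DefWindowProc(hwnd, msg, wparam, lparam);
}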

Ideally I'd like to print nothing to the screen;
the developer does not need to see all this
information. The information can be saved to a
file, but then I need to keep separate files for
Wine and (a few versions of?) Windows.

> > - what about running the suite weekly (or daily)
> > automatically and publishing the results to
> > wine-devel?
> Good idea! Might prove worthwhile.

For this feature the compact output is useful too.

> > - most developers on this list have access to
> > one version of Windows. Is it difficult to
> > create a "testing farm" with remote access to a
> > few versions of Windows? This would help
> > developers to test their code on a few
> > platforms. Existing environments in the
> > companies involved in the project can be used.
> Hmm, why?
> The idea is that hundreds (or hopefully
> thousands?) of volunteer Windows developers create
> bazillions of test functions for specific API
> functions.
> That will happen on specific Windows versions
> only, of course.
> Now we have a test framework for a specific API
> function on a specific Windows version.
> Now if there are behavioral conflicts on different
> Windows versions (functions behave differently),
> then I guess people will notice immediately and
> fix the test function immediately to support the
> different behaviour of different Windows versions.
> --> no problem at all.

I was thinking not only about unit tests. Sometimes
I'd like to know how a different version of Windows
behaves. The only option I have is to ask somebody
who has such a version to run a test (honestly, up
to now I have been too lazy to ask anybody :-).
But you are right, it is not a big issue.

> > - I remember a long time ago there was a post on
> > wine-devel about using Perl or a Perl-like
> > language for unit testing.
> > What is the current status of that project?
> Hmm. That'd be programs/winetest/, right?
Lazy me ;-)

Andriy Palamarchuk
