We *really* need a development model change!

Andriy Palamarchuk apa3a at yahoo.com
Thu Jan 3 06:54:15 CST 2002


Alexandre Julliard wrote:
> Andriy Palamarchuk <apa3a at yahoo.com> writes:
>>1) look at file test1.pl. It implements exactly the
>>functionality of the existing test.pl module using the
>>Test::Simple framework. The only changes I made are
>>descriptive error messages for the first few tests. 
>>
>>Output of test1.pl:
>>ok 1 - Valid atom handle
>>ok 2 - No error code defined
>>ok 3
>>ok 4 - Succeed code
>>ok 5 - Atom name
>>ok 6
>>ok 7
>>ok 8
>>ok 9
>>ok 10
>>ok 11
>>ok 12
>>ok 13
>>ok 14
>>ok 15
>>1..15
>>
>>The basic usage is no more difficult than the one you
>>suggested, right?
>>
> 
> Yes, using ok() or assert() is pretty much the same. But it should not
> be printing all that stuff IMO, except if you explicitly ask it to when
> debugging a test for instance.
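For reference, basic Test::Simple usage along the lines of test1.pl looks roughly like this (a sketch; the values tested here are placeholders standing in for the real Win32 atom calls):

```perl
#!/usr/bin/perl -w
use strict;
use Test::Simple tests => 3;

# Placeholder value standing in for a real GlobalAddAtom() result;
# the real test1.pl calls the Win32 API here.
my $atom = 49152;

ok( $atom != 0, 'Valid atom handle' );   # prints "ok 1 - Valid atom handle"
ok( $atom >= 0xC000 );                   # no description: prints just "ok 2"
ok( 'foo' eq 'foo', 'Atom name' );       # prints "ok 3 - Atom name"
```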

The 'ok' messages are used by the Test::Harness module
to track the progress of a test. For example, it can
report the point of a crash, if a test silently dies,
by knowing the last test that was reported.
You can use the Test::Harness module to configure the
report as you like.
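Those "ok N" lines are exactly the protocol Test::Harness parses. A minimal driver script in this spirit (a sketch; the test file names are examples) would look like:

```perl
#!/usr/bin/perl -w
use strict;
use Test::Harness;

# Verbose mode echoes every "ok" line; leave it off for a summary only.
$Test::Harness::verbose = 0;

# runtests() parses the "ok N"/"not ok N" output of each script, prints
# consolidated totals, and flags scripts that die before finishing.
runtests('test1.pl', 'test2.pl');
```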

[...]
>>3) Things become even more interesting when
>>Test::Simple is used with the Test::Harness module.
>>Test::Harness allows running many tests at once and
>>consolidating their results.
>>test_all.pl uses the module to run all the tests
>>(currently test2.pl only). The output of the script:
>>
>>test2.pl............# Failed test (test2.pl at line 8)
>># Looks like you failed 1 tests of 4.
>>dubious
>>	Test returned status 1 (wstat 256, 0x100)
>>DIED. FAILED tests 2-3
>>	Failed 2/4 tests, 50.00% okay
>>Failed Test  Status Wstat Total Fail  Failed  List of failed
>>-------------------------------------------------------------------------------
>>test2.pl          1   256     4    2  50.00%  2-3
>>Failed 1/1 test scripts, 0.00% okay. 2/4 subtests
>>failed, 50.00% okay.
>>
> 
> I really don't see a need for this kind of thing. IMO we should
> enforce that tests always succeed, otherwise we can't do regression
> testing.

Always succeed *under Windows*. Do you really, really,
really think all the tests will succeed under Wine
from day 1, and that we will be able to keep them
failure-free?

The value of unit tests is exactly in the failures!
The more failures the unit tests catch, the better the
test developers have done their work.

There is a whole programming methodology which
dictates that you write the tests first, then
implement the code that makes them succeed.
Please look at this short article to better
understand my point of view:
"Test Infected: Programmers Love Writing Tests"
http://members.pingnet.ch/gamma/junit.htm
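In that style the test states the specification before the code exists. A tiny sketch, with a hypothetical helper standing in for a real Win32 implementation:

```perl
#!/usr/bin/perl -w
use strict;
use Test::Simple tests => 1;

# Written first: the test is the specification.
ok( atom_name_length('foo') == 3, 'Atom name length' );

# Written second, to make the test above pass (hypothetical helper,
# not a real Wine or Win32 function).
sub atom_name_length { my ($name) = @_; return length $name; }
```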


> And running the tests through the Makefile has the advantage that you
> can check dependencies and only run tests when something affecting
> them has changed.
 
Wrong. In makefiles you can check only some
compilation dependencies. You can't do anything about
logical dependencies. I'm for developers running the
whole suite of tests before submitting *any* patch,
and for running it centrally from time to time.

Think about the Wine unit tests as a specification of
the Win32 API. When you get such a reference, the
first thing you want to do is find all the areas where
Wine does not correspond to the specification, right?
Then you repeat the check after changes to be sure
nothing is broken.
This is a great reference - self-checking, growing
with each use.

This is why I very much want to keep it in a separate
directory. Even the test structure should probably
correspond more to the logical Win32 API structure
than to the Wine directory tree.
The Win32 unit test suite has huge value by itself.
All Win32 projects can contribute to it on equal
terms. The ideal solution would probably be to move it
to a completely separate project. We can do this later
if the need arises.

Andriy Palamarchuk





More information about the wine-devel mailing list