new d3d9/device.ok test always fails here, but not a regression?

Henri Verbeet hverbeet at gmail.com
Tue Jan 3 08:29:25 CST 2012


I had hoped this was one of those threads that would go away if I
ignored it. Oh well.

I actually think there's value in handling anything that breaks an
application as a regression. Despite claims to the contrary, we like
users to test releases and find regressions sooner rather than later.
If applications are broken for extended periods of time, that only
encourages users / potential testers to stick with outdated releases,
and in that case it doesn't really matter whether something broke
because a patch itself was bad or because that patch just exposed some
other broken code. (As an aside, the latter seems much more common.)

Unfortunately, doing things like that requires a certain amount of good
judgment about what is reasonable to mark as a regression and what
isn't, as opposed to rigidly applying a set of rules. For example, if
adding a new dll exposes the fact that we need an HLSL compiler, that
isn't really going to benefit from being marked as a regression; it's
just something that practically everyone already knows, and that simply
takes a certain amount of effort to address. On the other hand, if it's
something that can easily be fixed by e.g. adding a stub, or just
fixing a bug somewhere else, I think marking it makes sense. (Although
if it's easy enough and you're a developer anyway, you might as well
just fix it yourself.) Evidently this is hard for people.

Fundamentally, keywords are a tool for developers to fix bugs more
efficiently. They're explicitly not some kind of "management" tool, or
a larger megaphone for users to shout "LOOK HERE, MY BUGS ARE
IMPORTANT!!1". (Incidentally, there are actually people you can pay to
prioritize your bugs, but even they will tell you to go away if you
try to be too much of a dick about it.) Similarly, joking about the
"Hall of Shame" is all good fun, but the main purpose of that page is
to give people a good way of keeping track of their regressions, since
IMO plain Bugzilla isn't very good at that.

Now, about the bug that started this thread. I obviously run the
D3D tests myself almost daily before sending patches. I also run them
regularly (but less often) on a couple of other machines with
different configurations, and on Windows. I watch test.winehq.org from
time to time for new failures in the D3D tests. (On that subject, it
would probably be nice if we could see a history of changes in
failures for specific tags, to see how consistent those failures are.)
The test.winehq.org data shows that this test fails for some people,
but passes for most. It consistently passes on Windows. It
consistently passes on all my boxes. This all means the following
things:
    - The only way that bug report was going to tell me something I
didn't already know was if I didn't watch test.winehq.org (wtf) and
either didn't run the tests myself (wtf) or they didn't fail for me
(in which case I wouldn't be able to reproduce the bug with the
information in the report).
    - Given that it consistently passes on Windows, and given the
nature of the test, it's probably more likely that there's an issue
with e.g. event delivery somewhere than with the test itself, or even
with anything D3D-related.
    - The most useful thing someone who can consistently reproduce
that bug can do is to try to debug it, asking for help where needed
(one way to re-run just that test locally is sketched after this
list). Failing that, figuring out what makes it consistently
reproducible would be a good second best.
    - There's no possible way that adding the regression keyword to
that bug is going to help anything.
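
For anyone who does see the failure and wants to dig in, something
roughly like the following should re-run just that test in a built
Wine source tree. Take it as a sketch rather than the one true way;
the "device.ok" target matches the one in the subject, and
WINETEST_DEBUG is the test framework's usual verbosity knob.

    # In a built Wine source tree: re-run only the d3d9 device test.
    cd dlls/d3d9/tests
    rm -f device.ok      # drop the stamp so make actually re-runs it
    make device.ok       # or "make test" to run all the d3d9 tests
    # If the default traces aren't enough, raise the verbosity:
    rm -f device.ok
    WINETEST_DEBUG=2 make device.ok

The output of a failing run like that, along with the usual system
details, is going to be far more useful on the bug than any keyword.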


