ATI Opengl regression (DRI?)

Aric Cyr Aric.Cyr at gmail.com
Thu Dec 15 19:26:29 CST 2005


Raphael <fenix <at> club-internet.fr> writes:

> 
> On Thursday 15 December 2005 19:55, Jesse Allen wrote:
> > Hi,
> >
> > It seems that the patch git-1399edb0925966a802a6a39835025c22c22c18e1.patch
> > found here
> > http://www.winehq.org/pipermail/wine-cvs/2005-December/019731.html causes
> > an opengl regression on my system.  With the patch loading War3 causes X
> > Error of failed request:  GLXUnsupportedPrivateRequest
> >   Major opcode of failed request:  143 (GLX)
> >   Minor opcode of failed request:  17 (X_GLXVendorPrivateWithReply)
> >   Serial number of failed request:  429
> >   Current serial number in output stream:  429
> >
> > Which seems to stop the game loading thread and causes the game to use
> > the fail-safe thread "Please insert disc", so the game wont load.
> > Reversing the patch fixes the problem.
> >
> > I have a Radeon 9200 using DRI snapshots about 20051024.
> >
> > X Window System Version 6.8.99.901 (6.9.0 RC 1) (Minimal DRI build from
> > X.org tree)
> > Release Date: 18 October 2005 + cvs
> > X Protocol Version 11, Revision 0, Release 6.8.99.901
> > Build Operating System: Linux 2.6.14-rc5 i686 [ELF]
> > Current Operating System: Linux tesore 2.6.15-rc4-git1 #1 PREEMPT Fri Dec 2
> > 17:03:32 MST 2005 i686
> > Build Date: 28 October 2005
> >
> > Is this a DRI problem?
> 
> No, only that DRI don't implement GLX 1.3 
> i just sent a patch to fix (ie. by-pass) this regression.

You really don't need to use glXQueryServerString() and glXQueryClientString().
It would be better, easier (no strcmp() needed), and more correct to just use
glXQueryVersion(), which automatically reports the version common to both the
client and the server (so in this case 1.2).

Another thing I don't understand in your patch is why you have wine_glx_t and
wine_glx defined at global scope.  The only place in your patch they seem to
be used is wgl.c, so why not define wine_glx_t in wgl.c and make wine_glx
static?  Sorry if I am missing something.

(Also there is some DEPTH_BITS hack in internal_glGetIntegerv which I assume is
unrelated to this GLX patch?)

> i thought that DRI implemented GLX 1.3 specs but seems they use a too older x 
> code :(
> http://cvs.sourceforge.net/viewcvs.py/dri/xc/xc/programs/Xserver/GL/glx/

Too old perhaps, but that's what DRI (and hence ATI) are using.  Both support
most of the 1.3 features, so there really isn't much of an issue.  The problem
is that we cannot assume 1.3 regardless of how old 1.2 is; checking the GLX
version lets us do the right thing in either case.

Also, GLX 1.4 isn't an official standard yet; it is still a draft.  There is
an interesting recent thread about this at
http://lists.freedesktop.org/archives/xorg/2005-November/011279.html

It also seems like we should be relying on glXGetProcAddress(), but we need to
use glXGetProcAddressARB() instead, since nVidia apparently doesn't export the
former.

Regards,
  Aric



