[Bug 11203] Versions 9.52 & 9.53 throw page exception on Enterprise Architect

wine-bugs at winehq.org
Sat Aug 15 14:54:05 CDT 2009


http://bugs.winehq.org/show_bug.cgi?id=11203

--- Comment #34 from Stefan Dösinger <stefandoesinger at gmx.at>  2009-08-15 14:53:52 ---
32 bpp == 24 bits of color information + 8 bits of padding. There's a
difference between color depth and framebuffer bpp on X. Windows doesn't
properly separate the two. This is where all this confusion comes from, and
even I don't get it right all the time.

GDI objects always have at most 8 bits of color information per channel.
However, since 3-byte-aligned data is hard to manage, graphics cards usually
pad the information to 4 bytes per pixel, adding 8 junk bits. The size of the
image grows, the data it stores doesn't. You can essentially see this as
trading comparatively cheap video memory (you don't really need more than
4-8 MB for GDI) for faster processing.
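
A minimal sketch of that tradeoff in address arithmetic (hypothetical
helpers, not Wine code): packed 24 bpp needs a multiply by 3 for every pixel
access, while padded 32 bpp pixels are word-aligned:

#include <stdint.h>
#include <stddef.h>

/* Packed 24 bpp: 3 bytes per pixel, the offset needs a multiply by 3,
 * and pixels straddle word boundaries. */
static uint8_t *pixel_24bpp(uint8_t *fb, size_t pitch, int x, int y)
{
    return fb + y * pitch + x * 3;
}

/* Padded 32 bpp: 4 bytes per pixel, every pixel is naturally aligned
 * and can be read or written with a single 32-bit access. */
static uint32_t *pixel_32bpp(uint8_t *fb, size_t pitch, int x, int y)
{
    return (uint32_t *)(fb + y * pitch + x * 4);
}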

X usually writes this during startup:
(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32

That means there are 24 bits of color depth, and a pixel takes 32 bits; 8
bits are unused. This setup is known on Windows as "32 bpp". The line is
driver-specific, so the Intel driver might write something different.
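
The same split is visible through plain Xlib, which reports depth and bits
per pixel separately. A small sketch (compile with -lX11):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XPixmapFormatValues *fmts;
    int i, count;

    if (!dpy) return 1;
    fmts = XListPixmapFormats(dpy, &count);
    for (i = 0; i < count; i++)
        printf("depth %d -> bits_per_pixel %d\n",
               fmts[i].depth, fmts[i].bits_per_pixel);
    /* A typical setup prints "depth 24 -> bits_per_pixel 32" among its
     * entries - the same 24/32 split as in the X log line above. */
    XFree(fmts);
    XCloseDisplay(dpy);
    return 0;
}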

The reasons why games usually check for 32 bpp are:

*) Because they're stupid and don't know what they're doing.
*) OpenGL and D3D can use the 8 filler bits for back buffer alpha, allowing
additional blend modes. There are better ways to check for support for that,
so it comes down to "games are stupid" again. A sketch of the typical check
follows below.
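
For illustration, here is the kind of check such games do, as a minimal
sketch against the Win32 EnumDisplaySettings API (not taken from any
particular game):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DEVMODEW dm = {0};
    dm.dmSize = sizeof(dm);
    if (!EnumDisplaySettingsW(NULL, ENUM_CURRENT_SETTINGS, &dm))
        return 1;
    if (dm.dmBitsPerPel != 32)
    {
        /* This is the check Wine must satisfy: the 24+8 X11 setup has
         * to be advertised as dmBitsPerPel == 32, or the game bails. */
        fprintf(stderr, "Need 32 bpp, got %lu\n", dm.dmBitsPerPel);
        return 1;
    }
    return 0;
}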

The D3D formats have a nicer representation:

D3DFMT_R5G6B5: Usually known as 16 bpp.
D3DFMT_R8G8B8: Packed RGB8, known as 24 bpp. Very few cards support it for
framebuffers these days, and no card I know of supports it for textures.
D3DFMT_X8R8G8B8: Known as 32 bpp. It contains the same information as
D3DFMT_R8G8B8, plus 8 unused filler bits.
D3DFMT_A8R8G8B8: This isn't available in GDI - it uses the 8 filler bits for
alpha storage. However, the back buffer can be in this format. The front
buffer is always in X8R8G8B8 format. When the back buffer is presented, the
data is copied over (a simple memcpy on the GPU, or just a pointer change for
a flip), and the A channel becomes an X channel and is simply ignored.
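
To make the layouts concrete, a small sketch of the 32-bit packing (these
are the bit positions D3D defines for the formats above):

#include <stdint.h>

/* A8R8G8B8: alpha in bits 24-31, then R, G, B. */
static uint32_t pack_a8r8g8b8(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) |
           ((uint32_t)g <<  8) |  (uint32_t)b;
}

/* "Converting" an A8R8G8B8 back buffer to the X8R8G8B8 front buffer is
 * a no-op on the pixel data: the A byte simply becomes the ignored X
 * byte. */
static uint32_t a8r8g8b8_to_x8r8g8b8(uint32_t pixel)
{
    return pixel; /* bits 24-31 are now meaningless padding */
}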

On newer cards (D3D10 class) there is also this:
D3DFMT_A2R10G10B10: Also a 32 bpp mode, with 30 bits of color information -
interesting for D3D, not so much for GDI. This format can be used for back
buffers I think, but usually it is used for temporary results that need
higher precision. Afaik monitors are still in 24 bit color depth mode, which
was once considered more than the human eye can see - although not everyone
agrees with that.
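
Its bit layout, in the same sketch style:

#include <stdint.h>

/* A2R10G10B10: 2-bit alpha in bits 30-31, 10 bits per color channel. */
static uint32_t pack_a2r10g10b10(uint32_t a, uint32_t r, uint32_t g,
                                 uint32_t b)
{
    return ((a & 0x3)   << 30) | ((r & 0x3ff) << 20) |
           ((g & 0x3ff) << 10) |  (b & 0x3ff);
}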

But the 24->32 bit patch was mostly about making Wine separate the two
elements in this line:
(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32

Advertise this as 32 bpp to the app, and use 24 bit depth for X11 objects.
It's up to X11 how to store it - it will usually store it as 24+8, and I
think shared memory access can work that way, but I am not 100% sure about
this. Wine has conversion functions, although those tend to slow things down.
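
For illustration, a minimal sketch of what such a conversion looks like (not
Wine's actual routine): expanding one packed 24 bpp scanline to X8R8G8B8,
assuming the usual BGR byte order of 24 bpp DIBs. Doing this on every blit
is what slows things down compared to matching formats.

#include <stdint.h>
#include <stddef.h>

static void convert_row_24_to_32(const uint8_t *src, uint32_t *dst,
                                 size_t width)
{
    size_t x;
    for (x = 0; x < width; x++)
    {
        /* Three B, G, R bytes in, one X8R8G8B8 word out; the X bits
         * are left as zero. */
        dst[x] =  (uint32_t)src[3 * x]            |
                 ((uint32_t)src[3 * x + 1] <<  8) |
                 ((uint32_t)src[3 * x + 2] << 16);
    }
}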
