[PATCH] winex11.drv: Fix drawing of layered windows with a client window.

Rémi Bernon rbernon at codeweavers.com
Thu Apr 29 08:22:33 CDT 2021


On 4/29/21 11:27 AM, Giovanni Mascellani wrote:
> Hi,
> 
> Il 27/04/21 11:33, Rémi Bernon ha scritto:
>> As far as I can tell from a quick test with OpenGL rendering, there 
>> are two different behaviors on Windows:
>>
>> I'm only considering the layered attribute and calls on top-level 
>> windows, as I couldn't make it work on child windows, contrary to 
>> what MSDN says is supported on Windows > 8.
>>
>> 1) When UpdateLayeredWindow is called, the displayed image is the one 
>> that has been provided in the call, regardless of any later drawing 
>> call, be it GDI or OpenGL. If the window has child windows, they 
>> aren't displayed either, regardless of the API they use.
>>
>> 2) When SetLayeredWindowAttributes has been called, the window is 
>> displayed, with the transparency computed according to the attributes. 
>> If the window has child windows, they are displayed normally, but with 
>> the same transparency transformation. Drawing with GDI or OpenGL on 
>> the top-level or the child windows works as for a normal window, 
>> except for the transparency.
> 
> You're right, I was forgetting about SLWA. With this in mind, I can 
> reproduce the behaviors you describe, except that I didn't test with 
> child windows. On the other hand, I also checked that D3D9 is on the 
> same level as GDI and OpenGL (i.e., it is ignored after ULW). So, let 
> me summarize the Windows behavior again. Windows can be of three 
> different mutually exclusive types: not layered, ULW-layered and 
> SLWA-layered. Creating a window with WS_EX_LAYERED makes it ULW-layered, 
> otherwise it is not layered (on Windows a ULW-layered window appears 
> immediately, even if it's empty, and it receives events; on Wine it 
> doesn't). Then the painting behavior is:
> 
> * If the window is not layered, ULW and SLWA return error and do 
> nothing. The window receives WM_PAINT as usual. GDI, OpenGL, D3D9 work 
> as usual and are on the same level, meaning that each of them can 
> overwrite what the others painted. I suppose Vulkan and other D3D 
> versions are analogous. Setting WS_EX_LAYERED on the window makes it 
> ULW-layered. Window decorations are rendered as usual.
> 
> * If the window is ULW-layered, it doesn't receive WM_PAINT and all the 
> other APIs are ignored (at least, they don't draw; I don't know if they 
> produce side-effects, if they have any). The only way to put something 
> in the window is calling ULW. Clearing WS_EX_LAYERED on the window makes 
> it not layered, and calling SLWA makes it SLWA-layered (and generates a 
> WM_PAINT event). Window decorations are not rendered.
> 
> * If the window is SLWA-layered, it receives WM_PAINT as usual. All the 
> other APIs (i.e., not ULW) work as in the non layered case, except that 
> the post-processing set with SLWA (color keying and global alpha) is 
> applied. ULW returns error (there is no way to switch directly to 
> ULW-layered: you have to clear and set WS_EX_LAYERED) and clearing 
> WS_EX_LAYERED of course makes the window not layered. Calling SLWA again 
> (possibly with different settings) does not cause a new WM_PAINT event, 
> but correctly updates the window, which probably means that Windows 
> keeps an offscreen copy of the original buffer. Window decorations are 
> rendered as usual (and are affected by the SLWA alpha).
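
The three states summarized above form a small state machine. Here is a 
minimal sketch in plain C; the names are invented for illustration and 
this is neither actual Wine nor Win32 code, just a model of the observed 
behavior:

```c
#include <assert.h>
#include <stdbool.h>

/* The three mutually exclusive states described above. */
enum layer_state { NOT_LAYERED, ULW_LAYERED, SLWA_LAYERED };

/* Setting WS_EX_LAYERED on a non-layered window makes it ULW-layered;
 * on an already layered window it changes nothing. */
enum layer_state set_ws_ex_layered(enum layer_state s)
{
    return s == NOT_LAYERED ? ULW_LAYERED : s;
}

/* Clearing WS_EX_LAYERED always makes the window not layered. */
enum layer_state clear_ws_ex_layered(enum layer_state s)
{
    (void)s;
    return NOT_LAYERED;
}

/* UpdateLayeredWindow succeeds only on a ULW-layered window. */
bool ulw_succeeds(enum layer_state s)
{
    return s == ULW_LAYERED;
}

/* SetLayeredWindowAttributes fails on a non-layered window and
 * switches any layered window to SLWA-layered. */
bool slwa(enum layer_state *s)
{
    if (*s == NOT_LAYERED) return false;
    *s = SLWA_LAYERED;
    return true;
}

/* Only non-layered and SLWA-layered windows receive WM_PAINT. */
bool receives_wm_paint(enum layer_state s)
{
    return s != ULW_LAYERED;
}
```

Note how the only way back to ULW-layered from SLWA-layered goes through 
clear_ws_ex_layered() followed by set_ws_ex_layered(), matching the 
observation that ULW returns an error on an SLWA-layered window.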
> 
> We already have a "layered" bit in x11drv_win_data, but I don't think we 
> have a bit to distinguish the two layered variants. I think we should 
> have one, which we would use to deny painting by other APIs when the 
> window is ULW-layered (and, conversely, to allow ULW to paint on the 
> client window, i.e., what my patch was trying to do; or maybe in this 
> case we should just unmap or destroy the client window). Does this look 
> sensible? (I'm not meaning that this bit is the only thing we need to be 
> like Windows, but that at least it is a step forward)
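
For what it's worth, the proposed extra bit could be sketched like this. 
The struct and field names here are hypothetical, not the real 
x11drv_win_data from winex11.drv:

```c
#include <assert.h>
#include <stdbool.h>

struct win_data_sketch
{
    bool layered;      /* window has layered behavior (existing bit) */
    bool ulw_layered;  /* proposed: layered via ULW rather than SLWA */
};

/* GDI/GL/VK drawing is denied only when the window is ULW-layered. */
bool other_apis_can_draw(const struct win_data_sketch *data)
{
    return !(data->layered && data->ulw_layered);
}

/* ULW is allowed to paint the client window only in the ULW case. */
bool ulw_can_draw(const struct win_data_sketch *data)
{
    return data->layered && data->ulw_layered;
}
```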
> 
>> Here I would expect the surface flush to only work once, until a later 
>> GL swap buffer / VK present overrides it (although I'm not completely 
>> sure when the Expose events are generated).
>>
>> It may also work differently depending on whether the client windows 
>> are off-screen or not (depending on whether it's the client window of 
>> a child window or the top-level window has child windows).
>>
>> Although this patch possibly fixes some bad cases, I think the layered 
>> window implementation is actually incorrect in a more general way.
>>
>> To make things simpler I think we should maybe make off-screen GL/VK 
>> rendering the default and do the present ourselves on the top-level 
>> windows, putting the GL/VK rendering on-screen /only/ when we're sure 
>> we can (which is hopefully still going to be quite often). 
> 
> I am not sure I am following you. In which case would off-screen 
> rendering help?
> 
> Thanks, Giovanni.
> 
> 

I think it would help generally, to implement things correctly.

Things are complicated, with all sorts of possible cases and combinations, 
and I think we are trying to shoehorn the Windows compositing rules into 
a system that has rules of its own.


For instance, when there are multiple GL/VK/D3D drawing chains involved, 
we create client windows for each of them, but keep only the last 
created one on-screen.

With wined3d, I believe all the D3D instances are sharing the same 
client window, so there's only one client window being created and 
alternating between swapchains works fine.

However, if you start combining that with GL calls, or Vulkan, or DXVK, 
which creates a Vulkan surface for each swapchain (or D3D12 over Vulkan, 
I guess), it doesn't work anymore on Wine, although I believe it is 
perfectly fine on Windows, where the last presented surface is visible 
regardless of how it was drawn. Because we create multiple client windows 
and any previously created one gets discarded, only the last one is kept 
on-screen.

Then comes GDI drawing, which is a whole story itself, as we have two 
different ways to draw things depending on the situation.

In most cases we use dibdrv, which draws to a memory buffer that is 
flushed to the window from time to time; the other path draws directly 
to the window through the X11 API. When there are client windows 
involved, the correct way to draw GDI is to draw over the last presented 
surface, so the second option should be used. But it's less efficient, 
and it gets tricky when there are multiple client windows involved.
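
The choice between the two GDI paths could be summarized like this; the 
names are invented for illustration and this is not actual Wine code:

```c
#include <assert.h>
#include <stdbool.h>

enum gdi_path { GDI_PATH_DIBDRV, GDI_PATH_DIRECT_X11 };

/* When client (GL/VK) windows are involved, GDI has to draw over the
 * last presented surface, so the direct-to-window X11 path is needed;
 * otherwise the buffered dibdrv path is preferable. */
enum gdi_path choose_gdi_path(bool has_client_windows)
{
    return has_client_windows ? GDI_PATH_DIRECT_X11 : GDI_PATH_DIBDRV;
}
```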

And things could get even trickier if we consider exclusive full-screen 
behavior (which may be different) or window surface read-backs. And 
layered windows, as you described, also have their own rules.


Overall I think this is pretty brittle, because we try to delegate the 
compositing to X11. IMHO we should instead implement it ourselves, using 
off-screen rendered surfaces and compositing things manually when needed.

Then, hopefully, in the most common case there will only be a single 
drawing API involved, with full-screen (or full-window) draw calls, no 
child windows, etc., and we can leverage that to make it fast. Only in 
that case can we safely put the surface on-screen and let X11 present 
it directly.
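
The fast-path condition I have in mind could look something like the 
following sketch; all the names here are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

struct present_state
{
    int drawing_api_count;  /* GL, VK, GDI, ... active on this window */
    bool full_window_draws; /* draw calls cover the whole window */
    bool has_child_windows;
};

/* Present on-screen only in the simple case; otherwise keep rendering
 * off-screen and composite manually. */
bool can_present_onscreen(const struct present_state *s)
{
    return s->drawing_api_count == 1 && s->full_window_draws
           && !s->has_child_windows;
}
```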


At least this is my current understanding. I may of course have 
misunderstood or missed things.

Cheers,
-- 
Rémi Bernon <rbernon at codeweavers.com>


