User handles changes

Rémi Bernon rbernon at codeweavers.com
Mon Feb 14 15:46:02 CST 2022


On 2/7/22 18:47, Jacek Caban wrote:
> I created a branch with my prototype of implementation:
> 
> https://github.com/jacekcw/wine/tree/user-handles
> 
> I think this is enough to get a sense of how it could work. Those 
> patches still need some work and testing, but it's close enough.
> 
> 
> It implements the user handle table using shared memory that is 
> writable from the server, but read-only for clients. Using shared 
> memory for the handle table alone is enough to reimplement things like 
> IsWindow() and GetWindowThreadProcessId(). A handle table entry 
> contains three objects:
> 
> - server object, not much changes here
> 
> - client 'kernel' object. Using windows as an example, this is the WND 
> struct. We will want to make it private to win32u (instead of user32), 
> but other than that its mechanism can stay mostly unchanged, except we 
> will need to be more careful to match the server state closely
> 
> - shared struct representing the current state of the object. This 
> replaces some fields of the above types, but will also be accessible 
> from PE code.
> 
> 
> The user-handles branch reimplements GetWindowLong* on top of shared 
> memory (mostly; it's not complete there, but the remaining cases will 
> be similar once done). This also seems to match how it works on 
> Windows: win32u exposes NtUserSetWindowLong, but no Get* equivalent. 
> The getter can be lock-free because the only race it needs to worry 
> about is the handle being closed and the memory reused (in which case 
> we'd read garbage). To avoid that, we simply need to recheck the 
> handle after reading the memory. Also note that the guarantees of the 
> user_section lock still hold: although this memory is modified on the 
> server side, it's always initiated by a server call from the proper 
> thread while SetWindowLong* holds the lock.
> 
> 
> There are some similarities to Huw's and Rémi's shared memory patches 
> that are used in Proton, but this patchset touches different areas. 
> The user-handles branch also doesn't need locking mechanisms, because 
> thread safety is guaranteed by other means. We may need something like 
> serial number-based synchronization once we need a way to read 
> multiple fields whose integrity cannot be guaranteed differently, but 
> that doesn't seem necessary for the cases I've needed so far. Anyway, 
> I believe those patches can be adapted to use mechanisms from my 
> series, but the overlap is small at this point.
> 
> 
> Any feedback is appreciated.
> 

Hi Jacek,


I had a quick look, although not in much detail. I really like the idea 
of introducing shared memory with a mechanism similar to the one in the 
Proton patches; it makes it possible to think about having those 
upstream too one day.

That said, I think I would rather use the same locking mechanism for 
both if possible. I like the way Huw did it with SHM read/write blocks, 
as it maps quite closely to the SERVER_START_REQ / SERVER_END_REQ 
pattern, making shared memory communication very visible and explicit 
(although it could perhaps be even better if it required a scope block, 
as server requests do).


I also think that having atomic_store_ulong spread all over doesn't 
make the code very readable. It's also not as efficient, I believe, as 
potentially every store needs a locked operation, whereas the SHM block 
only requires one for the atomic operations on the sequence counter.

Also note that the atomic_store helpers' name is now misleading, as 
Jinoh Kang noted in their patch the other day: they aren't atomic at 
all, only strongly ordered with respect to other volatile-qualified 
variables.

On the same note (and it's also a potential issue with the Proton SHM 
patches), I think we're relying on volatile a little too much, and that 
may not be ideal. For instance, every shared memory store is done 
through a volatile-qualified struct, so the stores are all strongly 
ordered with each other, even though the SHM block mechanism already 
provides the needed guarantees.


Regarding the session shared memory, I'm a bit worried about the 
allocator. I think it'd be much simpler to use fixed-size blocks and a 
freelist, especially as it looks like we only need to store window 
shared data there, for the moment at least. If we end up needing very 
differently sized blocks, then adding per-size pools is probably 
simpler too.


Cheers,
-- 
Rémi Bernon <rbernon at codeweavers.com>


