[PATCH 4/4] dlls/ntdll: Call try_recv inside sock_recv only if it is safe to do so.

Jinoh Kang jinoh.kang.kr at gmail.com
Tue Jan 18 20:30:33 CST 2022


On 1/19/22 08:08, Zebediah Figura (she/her) wrote:
> On 1/18/22 13:30, Jinoh Kang wrote:
>> Otherwise, try_recv() call from sock_recv() may race against try_recv()
>> call from async_recv_proc(), shuffling the packet order.
>>
>> Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=52401
>> Signed-off-by: Jinoh Kang <jinoh.kang.kr at gmail.com>
>> ---
>>   dlls/ntdll/unix/socket.c | 38 ++++++++++++++++++-------------------
>>   dlls/ws2_32/tests/sock.c |  8 ++++----
>>   server/protocol.def      |  4 ++--
>>   server/sock.c            | 41 ++++++++++++++++++++++++++++------------
>>   4 files changed, 54 insertions(+), 37 deletions(-)
> 
> Despite the attempts to split 2/4 and 3/4 out of this patch, it's still doing two things at once. Namely, the optimization whereby recv_socket returns STATUS_ALERTED should not be part of this patch.

I just realised that I can still keep try_recv() before recv_socket even after
removing the status field from struct recv_socket_request.  I have to admit
that this wasn't obvious to me the first time.

I'll split the patch accordingly.  Thanks for the remark!

> 
> As I discussed in [1] it's not really clear to me that three server calls is significantly worse than two.

I didn't really see the need for a new server request when we could reuse an
existing one.  Also, reusing it doesn't hurt readability much.

> It'd be nice to know that just removing the initial try_recv() call is really going to make programs worse if we're going to pursue it.

Actually, I have observed that simply removing try_recv() from sock_recv()
altogether introduces another set of problems (which make the tests fail):

1. It makes recv() on a nonblocking socket always fail with WSAEWOULDBLOCK.
2. It turns all synchronous failures into asynchronous failures.  Normally,
   a synchronous failure shall not fire the APC or post to the completion
   port at all.

Thus, a call to try_recv() still has to be made before the async request enters
pending mode, regardless of whether this call is done before receive_sock or
not.

My previous approach was to do this "initial" try_recv() call via APC_ASYNC_IO
(i.e. inside async_recv_proc), thereby making the call just like any other
asynchronous try_recv() call.  Eventually I could get to the point where all the
unit tests pass, but the end result looked a bit messy (especially with the
special case handling inside invoke_system_apc, and extra arrangements that are
needed inside receive_sock).

That said, I do need to write more test cases for synchronous vs. asynchronous
failure, specifically for STATUS_BUFFER_OVERFLOW.

> I.e. personally I'd rather just leave it alone until we find a program that gets worse.
> 
> That aside, though, I'm not sure that adding a new select type is really the right way to go about this. One approach that occurs to me, which might end up being simpler, would be to return an apc_call_t, essentially as if the select request had been called immediately afterward. (In that case perhaps we should return STATUS_KERNEL_APC rather than STATUS_ALERTED).

This is exactly the approach I have initially had in mind, and is described in
https://bugs.winehq.org/show_bug.cgi?id=52401#c4.  Pasting it here:

> 1. The client issues a 'recv_socket' call.
> 2. The server determines if read_q is empty *and* the request can be served
>    immediately.
> 3. If this is the case, the server does the following:
>    1. The server marks the current thread as being in system APC wait.
>    2. The server bypasses sock->read_q and calls async_terminate() directly.
>       (Alternatively, the server first puts the async in read_q and later
>        calls async_wake_up( &sock->read_q ).)
>    3. The server dequeues the next system APC (likely APC_ASYNC_IO from 3.2)
>       from the queue.
>    4. The server inserts the APC into the 'recv_socket' reply.
> 
> Ideally we could make it so that every Wineserver call has a flag that indicates
> whether it could process the original request *and* retrieve the next system APC
> from the queue in one go.  This will eliminate the extra SIGUSR1 round trip in
> other cases as well.

(Note: 3.2 is slightly wrong, we should always go through sock->read_q no matter
what.)

As you can see, this approach inevitably winds up tweaking the select
mechanism anyway.  It also leaves some open questions:

1. How shall we return the results for our new pseudo-APC call?
   Using prev_apc in the select request, or a separate server call?

2. Shall our pseudo-APC go through the system_apc queue?
   If so, what if there is another kernel APC that is already queued?

3. Shall we reuse invoke_system_apc() for this apc_call_t response?

4. Shall we reuse queue_apc() to generate the apc_call_t?
   If so, what shall be done with the gratuitous SIGUSR1 delivery?
   Enter uninterrupted region or explicitly block SIGUSR1 while
   calling receive_sock?

Also, as I've discussed earlier, the initial synchronous try_recv() call has
subtly different error-handling semantics from its subsequent asynchronous
counterparts, which does make the existing code more complicated.

> 
> [1] https://www.winehq.org/pipermail/wine-devel/2021-December/202825.html


-- 
Sincerely,
Jinoh Kang
