[PATCH 4/5] server: Added server side named pipe read and write implementation and use it for message mode pipes.

Sebastian Lackner sebastian at fds-team.de
Tue Feb 28 09:44:21 CST 2017


On 28.02.2017 15:14, Jacek Caban wrote:
> On 28.02.2017 14:40, Sebastian Lackner wrote:
>> On 28.02.2017 14:03, Jacek Caban wrote:
>>> As long as we transfer the result over the server socket, we don't have
>>> control in the server over when the iosb and result buffer are filled, so
>>> we need a signal from the client telling us when we can signal async
>>> completion (signal the event/object, queue the completion). I sent a
>>> patch that was more in line with how we do that for calls where I/O is
>>> implemented on the client side [1], but Alexandre preferred that to be in
>>> the server [2]. Having that in the server is not only cleaner, but also
>>> gives us more control, e.g. over in which case to fill the iosb or signal
>>> the event. It will also make it possible to implement another solution
>>> (below).
>> I assume Alexandre was mainly referring to the event / APC stuff, and that
>> he would also be fine with a code path that returns the result immediately
>> when it is available. This could for example be done by marking the async
>> object as "in progress" to avoid queuing unnecessary APCs. If a real result
>> is available at the end of the call, it could be transferred right away.
>> This would also help to fix the problem with "immediate returns" below.
> What about the problem of the client having to signal the server once it
> has received and processed the data? When would you set the event in that
> case? We can transfer the data immediately, but we still need to know when
> we can set the event / queue the APC. Currently we know that we can do that
> when APC_ASYNC_IO returns.

Well, there are also situations where it would be relatively easy to avoid
the round trip. We could return immediately at least when no events, APCs or
completions are involved, right? Since this is the most common situation, it
might still be worth the effort.
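
Just to make that concrete, here is a minimal sketch of the check I have in
mind. The structure layout and the helper name are invented for this mail;
they are not the actual wineserver types:

/* invented stand-in for the server-side async object */
struct async_sketch
{
    void *event;        /* event to set on completion, if any */
    void *apc;          /* user APC to queue, if any */
    void *completion;   /* completion port association, if any */
};

/* hypothetical helper: nonzero when an event, APC or completion is
 * involved and the extra client round trip is still required */
static int async_needs_roundtrip( const struct async_sketch *async )
{
    return async->event || async->apc || async->completion;
}

/* In a request handler, after the fd op produced a final status, the
 * result could be returned in the current reply when
 * !async_needs_roundtrip(async), falling back to the existing
 * APC_ASYNC_IO path otherwise. */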

> 
>>>> Also please note that we need to distinguish between "immediate returns"
>>>> and "async returns" to properly implement
>>>> FileIoCompletionNotificationInformation. In particular, this means that
>>>> waiting internally is not an option.
>>> It's easy to distinguish and still have internal waits. I can't pinpoint
>>> the right solution without looking deeper at
>>> FileIoCompletionNotificationInformation, but just as an example, you
>>> could do:
>>>
>>> if (iosb->status != STATUS_PENDING) mark_async_as_immediate_result();
>>>
>>> inside the request handlers just after calling the fd ops. Then you'd
>>> have all the info you need inside the async object. Am I missing
>>> something?
>> With the currently proposed solution this would not work. The fd ops
>> already call async_terminate -> async_set_result -> add_async_completion ->
>> add_completion, but depending on the completion flags this should not be
>> done if it is an immediate result. There is no way to know if it's supposed
>> to be an "immediate return" inside the async_*() helpers, and checking
>> afterwards is too late (unless we introduce a flag to block async objects
>> from doing actions right away). Did you have a different idea of how it
>> should be implemented?
> In the cases covered by my patch, async_terminate doesn't call
> async_set_result, but queues an APC instead, so that's not a problem.
> 
> More generally speaking, if we had a code path like the one you described,
> the solution I proposed would still work, just with the logic reversed.
> Mark all created asyncs as immediate returns, and change them to async
> where appropriate, something like:
> 
> if (iosb->status == STATUS_PENDING) mark_async_as_async_result();
> 
> in read/write/ioctl/flush should do. The register_async request could do
> that unconditionally.

For flush we are currently not using any callback, so marking it in the
handler would be too late. That's also something we could fix, of course.
Nevertheless, as you can see, it already gets pretty complicated. ;) If it's
possible to simplify this somehow, that would certainly be an improvement.
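
For what it's worth, here is a rough sketch of that reversed marking. All
names below are invented for illustration, this is not actual wineserver
code; it mainly shows where flush would first need a callback so that the
check has a place to run:

#define STATUS_PENDING 0x00000103  /* NTSTATUS value */

/* invented stand-in for the server-side async object */
struct async_sketch
{
    int is_async_result;   /* 0 = immediate return, 1 = real async */
    unsigned int status;   /* status left behind by the fd op */
};

/* every async starts out as an "immediate return" ... */
static void async_sketch_init( struct async_sketch *async )
{
    async->is_async_result = 0;
    async->status = STATUS_PENDING;
}

/* ... and is flipped in the read/write/ioctl handlers right after the
 * fd op returned; flush would need its result routed through a
 * callback first, otherwise there is no point where this check runs */
static void mark_if_async_result( struct async_sketch *async )
{
    if (async->status == STATUS_PENDING) async->is_async_result = 1;
}

With such a flag stored on the async object, the async_*() helpers could
then decide whether to signal the event or queue a completion according to
the FileIoCompletionNotificationInformation settings.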

Best regards,
Sebastian



