[PATCH] kernelbase: Fix relationship between Sleep() and GetTickCount()

Paul Gofman pgofman at codeweavers.com
Mon Aug 10 07:02:32 CDT 2020


On 8/10/20 14:55, Arkadiusz Hiler wrote:
> On Mon, Aug 10, 2020 at 02:45:49PM +0300, Paul Gofman wrote:
>> On 8/10/20 14:32, Paul Gofman wrote:
>>> On 8/10/20 13:55, Arkadiusz Hiler wrote:
>>>> Sleep() and GetTickCount() work on Windows in 15.6ms increments.
>>>>
>>>> Some programs (e.g. DOSBox) depend on this behavior and assume that the
>>>> return value of GetTickCount() will change while they are sleeping.
>>>>
>>>> Currently we update the shared counters used by GetTickCount() every
>>>> 16ms and on each server request, while our Sleep() implementation has a
>>>> resolution of 1ms, which causes DOSBox to hang.
>>>>
>>>> This patch changes Sleep() (and SleepEx()) to behave the same way as on
>>>> Windows and makes sure that GetTickCount() is updated during sleeping,
>>>> by decreasing the update interval to 15ms (worst case, without any
>>>> server calls).
>>>>
>>>> This fixes Doom II from Steam.
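
For reference, the pattern such programs rely on looks roughly like this
(a simplified sketch for illustration, not DOSBox's actual code):

    #include <windows.h>

    /* Keep sleeping until the tick counter advances.  Code like this relies
     * on GetTickCount() having changed across a Sleep() call; if it has not,
     * the loop just keeps spinning. */
    static void wait_for_next_tick(void)
    {
        DWORD start = GetTickCount();
        while (GetTickCount() == start)
            Sleep(1);
    }
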
>>> I also noticed some time ago that Sleep() on Windows used to work with a
>>> 15.6ms quantum. But the important part missed here is that Sleep()
>>> behaviour is affected (at least) by the winmm functions timeBeginPeriod()
>>> / timeEndPeriod() (as I recalled just now, after rerunning my old test
>>> program and wondering why I see a 1ms sleep quantum). E.g., if I call
>>> timeBeginPeriod(1) before testing the actual Sleep() time and the
>>> GetTickCount() change, Sleep() starts to behave very similarly to how it
>>> works now in Wine: it sleeps with a 1ms quantum and GetTickCount() is not
>>> necessarily updated on wake. I don't think we can ignore that behaviour;
>>> I have seen games playing with the winmm functions, Sleep(), etc.
>>>
>>> I was also initially going to say that maybe we would need to explicitly
>>> check whether GetTickCount() has changed during sleep, but the failing
>>> test on Win10 suggests that Windows may not guarantee that either and
>>> just sleeps with a 15.6ms quantum by default (unless changed via winmm).
>>>
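
A quick way to see that effect is something along these lines (untested
sketch; link with -lwinmm):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        DWORD ticks0, ticks1;

        timeBeginPeriod(1);               /* request 1ms timer resolution */
        QueryPerformanceFrequency(&freq);

        ticks0 = GetTickCount();
        QueryPerformanceCounter(&t0);
        Sleep(1);
        QueryPerformanceCounter(&t1);
        ticks1 = GetTickCount();

        printf("Sleep(1) took %.3f ms, GetTickCount() advanced by %lu\n",
               (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart,
               (unsigned long)(ticks1 - ticks0));

        timeEndPeriod(1);
        return 0;
    }

Comment out the timeBeginPeriod() call to compare against the default
~15.6ms behaviour.
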
>> PS: If timeBeginPeriod() indeed sets the global timer resolution for all
>> processes, as the docs suggest, I would guess there should be a field in
>> KSHARED_USER_DATA which holds that resolution (there are a few fields
>> related to timers, but I have never tested what exactly they hold), and we
>> should probably use those to store and retrieve the timer resolution.
> Indeed timeBeginPeriod() seems to affect the resolution of Sleep()
>
> https://hiler.eu/p/0701eab8d523.txt
> https://testbot.winehq.org/JobDetails.pl?Key=76759&f109=exe64.report#k109
>
> This is completely unexpected... Thanks for pointing this out!
>
> I guess I have to investigate whether it's truly global and if there is
> a field for that in KSHARED_USER_DATA.
>
> That would be a bit funny - anything that depends on the default
> behavior will be broken while that process is running, and if it
> crashes/forgets to call timeEndPeriod() upon exit we are left in that
> broken state forever :-)
>
Please also mind the NtSetTimerResolution() / NtQueryTimerResolution() 
functions (which are currently stubs in Wine). This all needs testing, 
but it would look natural for winmm to actually call those. All that 
probably means that it is NtDelayExecution() which needs to be fixed to 
honour the current timer resolution (besides a load of functions related 
to various timers; but maybe those can be treated separately from these 
changes?). Unfortunately that makes things more complicated, because 
NtDelayExecution() is widely used by Wine itself and those places might 
rely on (relatively) precise delays to avoid a significant performance 
loss, so those usages may need to be checked before changing it.
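
To make the guess about the layering concrete, something like the
following is what I would expect timeBeginPeriod() to boil down to
(purely hypothetical sketch, not verified against Windows; the helper
name is made up):

    #include <windows.h>
    #include <winternl.h>

    typedef NTSTATUS (NTAPI *NtSetTimerResolution_t)(ULONG res, BOOLEAN set,
                                                      PULONG cur);

    /* Hypothetical helper: request a timer resolution through ntdll.
     * winmm's timeBeginPeriod(ms) would then just convert milliseconds
     * to 100ns units and forward the request. */
    static MMRESULT begin_period_via_ntdll(UINT period_ms)
    {
        NtSetTimerResolution_t set_res = (NtSetTimerResolution_t)
            GetProcAddress(GetModuleHandleW(L"ntdll.dll"),
                           "NtSetTimerResolution");
        ULONG current;

        if (!set_res || set_res(period_ms * 10000, TRUE, &current))
            return TIMERR_NOCANDO;
        return TIMERR_NOERROR;
    }

NtDelayExecution() would then have to honour whatever the current
resolution is, which is exactly the part that worries me for the
Wine-internal callers mentioned above.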
