[Bug 28723] Sound stutter in Rage when emulated windows version is set to "Windows 7" (XAudio2 -> mmdevapi sound output path)

wine-bugs at winehq.org
Sun Dec 4 06:01:40 CST 2011


http://bugs.winehq.org/show_bug.cgi?id=28723

--- Comment #70 from Alexey Loukianov <mooroon2 at mail.ru> 2011-12-04 06:01:40 CST ---
(In reply to comment #69)
> 2. trailing samples not heard, comment #63;
Ah, thanks for reminding me about it - I really should do this test on native
today. It's a pity I forgot to do it while I was spending time looking at Vista
installer prompts.

> >Taking into account silent frames [...] is pretty simple.
> With either lead-in or trailing only silence, not in the 100Hz scenario.

Correct me if I'm wrong, but this 100Hz scenario is:
a) An extreme usage variant with no real-world use. I can't imagine anyone
would use it unless conducting tests like we do.
b) Essentially a constant xrun case, which can be handled perfectly well by the
"add silence at start" approach, provided the ALSA period is less than or equal
to the mmdev period. That is, at the moment of the timer callback we already
have an underrun (or would have one in less than 1ms), because at the previous
timer callback we only pumped out one mmdev period of data - 5ms of silence
plus 5ms of real data - which equals the timer callback period in duration.
Since the last event we have 5ms of data held in the mmdev buffer (the event is
fired after feeding ALSA and handling underruns). Thus the proposed xrun
handling logic prepends - in this case - 5ms of silence to the 5ms of available
data and pumps it out to ALSA (note: it should use max(alsa_period,
mmdev_period) as the "period length" when calculating the number of silent
frames to add). Then we signal the event, and by the time the next callback is
invoked we end up in exactly the same situation: xrun plus 5ms of data held in
the mmdevdrv buffer. Devpos readings would also look like they do on native for
this case.
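
To make the calculation above concrete, here is a small standalone sketch (not
actual winealsa.drv code - the helper name and the 48kHz figures are just
assumptions for illustration):

#include <stdio.h>

/* How much silence to prepend when an xrun is detected at callback time.
 * Uses max(alsa_period, mmdev_period) as the effective period length. */
static unsigned int silence_to_prepend(unsigned int alsa_period_frames,
                                       unsigned int mmdev_period_frames,
                                       unsigned int held_frames)
{
    unsigned int period = alsa_period_frames > mmdev_period_frames
                        ? alsa_period_frames : mmdev_period_frames;
    /* Pad the held data up to one full period with leading silence. */
    return held_frames < period ? period - held_frames : 0;
}

int main(void)
{
    /* 48kHz stream: 5ms ALSA period (240 frames), 10ms mmdev period (480),
     * 5ms (240 frames) of real data held in the mmdev buffer at callback time. */
    unsigned int silence = silence_to_prepend(240, 480, 240);
    printf("prepend %u silent frames, then write the 240 held frames\n", silence);
    return 0;
}

With these numbers it prints 240 silent frames, i.e. the 5ms of silence
prepended to the 5ms of available data as described above.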

> >How the things actually are can only be determined by reverse-engineering
> How can you conclude this? Look at how much we've found out simply by printing
> GetPosition & GetCurrentPadding and writing tests. 

I'm also a big fan of "black box games", I just don't like emacs (due to lisp)
and prefer to toy with other puzzles like: "here is a binary which takes
something as input and produces something as output; investigate it any way you
want except for disassembling, and try to produce code that does the same
thing; the less time it takes to accomplish the task and the less complicated
the resulting code is, the better". My experience with solving such puzzles has
taught me that sometimes there are several different ways to achieve the same
observed behavior - and, what's pretty funny, in such cases the original
algorithm used in the "black box" most of the time does the thing in yet
another, third way :-). It seems to me that the "devpos jump at underrun" is
just one of those cases. We see it, we know it's there - but we don't know the
underlying logic behind it and can only make guesses. I can't devise a test
that would allow us to distinguish the "bump at software underrun" case, which
is your interpretation, from the "bump after the last period has been written
to hw" case. That doesn't mean there's no such test, it just means that I'm too
dumb to invent it :-). Em, and a bit too lazy to go and attach my laptop to the
oscilloscope to check what is actually emitted from the hardware at the end of
stream playback :-).

> Back to bugs, one can derive the requirement ...
> ... The device could even adapt dynamically and reduce the period from e.g.
> 100ms downto 10ms should an underrun occur. Daydreaming?

Yes and yes regarding the maximum ALSA period requirements vs. Rage vs.
artificially increased latency. Actually, I had come to just the same
conclusions yesterday after spending some time thinking about this problem.

As for a dynamically adapting period size - I don't think it's daydreaming, as
I've had just the same idea and was going to propose it in the "implementation
design details" text I've been planning to compose at the beginning of next
week. The only difference is that I've been thinking about fixing the ALSA
period at some sane small value (like 10ms), requesting a pretty big ALSA
buffer (say, ask for 1s and use as much as we are given), and dynamically
changing the duration between sequential timer callbacks based on the padding
value the app seems to maintain. I.e. if an app seems to keep the padding as
low as 10-20ms, we have to run timer callbacks at "full speed" (i.e. every
10ms). OTOH, if an app tends to store 300ms of audio data in the buffer, it
would most likely be OK to have the mmdev period be something like 150ms. The
algorithm may be something like: evaluate the duration we have until xrun based
on the amount of data currently buffered, and schedule the next callback to
come in half that duration, but no earlier than 10ms from now. That's a
preliminary thought, actually, as I haven't done any tests yet that would prove
whether this approach is viable or not.
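
For what it's worth, here is a standalone sketch of that scheduling rule (the
10ms floor comes from the description above; the frame counts and helper name
are just illustrative assumptions):

#include <stdio.h>

#define MIN_CALLBACK_MS 10  /* never run timer callbacks closer than 10ms apart */

/* Delay until the next timer callback, given how many frames the app keeps
 * buffered (padding) and the stream rate: wake up halfway to the predicted
 * xrun, but no earlier than 10ms from now. */
static unsigned int next_callback_delay_ms(unsigned int padding_frames,
                                           unsigned int frames_per_sec)
{
    unsigned int ms_until_xrun =
        (unsigned int)((unsigned long long)padding_frames * 1000 / frames_per_sec);
    unsigned int delay = ms_until_xrun / 2;
    return delay < MIN_CALLBACK_MS ? MIN_CALLBACK_MS : delay;
}

int main(void)
{
    /* App keeping ~20ms buffered at 48kHz -> "full speed", 10ms callbacks. */
    printf("20ms padding  -> %ums\n", next_callback_delay_ms(960, 48000));
    /* App keeping ~300ms buffered -> callbacks every ~150ms are enough. */
    printf("300ms padding -> %ums\n", next_callback_delay_ms(14400, 48000));
    return 0;
}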

-- 
Configure bugmail: http://bugs.winehq.org/userprefs.cgi?tab=email
Do not reply to this email, post in Bugzilla using the
above URL to reply.
------- You are receiving this mail because: -------
You are watching all bug changes.


