[PATCH 1/7] qedit/tests: Add initial tests for bitmap grab mode with a custom filter.

Gabriel Ivăncescu gabrielopcode at gmail.com
Mon Oct 26 08:35:56 CDT 2020


On 23/10/2020 18:51, Zebediah Figura wrote:
> 
> 
> On 10/21/20 8:07 AM, Gabriel Ivăncescu wrote:
>> On 20/10/2020 22:56, Zebediah Figura wrote:
>>> On 10/20/20 12:05 PM, Gabriel Ivăncescu wrote:
>>>> Thanks for the review.
>>>>
>>>> On 20/10/2020 19:46, Zebediah Figura wrote:
>>>>> On 10/19/20 11:48 AM, Gabriel Ivăncescu wrote:
>>>>>> We fill the video pattern with something that varies between lines and
>>>>>> columns, to test it properly later (including the scaling algorithm).
>>>>>>
>>>>>> Signed-off-by: Gabriel Ivăncescu <gabrielopcode at gmail.com>
>>>>>> ---
>>>>>>     dlls/qedit/tests/mediadet.c | 337 +++++++++++++++++++++++++++++++++++-
>>>>>>     1 file changed, 333 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/dlls/qedit/tests/mediadet.c b/dlls/qedit/tests/mediadet.c
>>>>>> index dc83bb9..010b746 100644
>>>>>> --- a/dlls/qedit/tests/mediadet.c
>>>>>> +++ b/dlls/qedit/tests/mediadet.c
>>>>>> @@ -136,6 +136,11 @@ struct testfilter
>>>>>>         struct strmbase_filter filter;
>>>>>>         struct strmbase_source source;
>>>>>>         IMediaSeeking IMediaSeeking_iface;
>>>>>> +
>>>>>> +    BOOL bitmap_grab_mode;
>>>>>> +    const GUID *time_format;
>>>>>> +    LONGLONG cur_pos;
>>>>>> +    HANDLE thread;
>>>>>>     };
>>>>>>     
>>>>>>     static inline struct testfilter *impl_from_strmbase_filter(struct strmbase_filter *iface)
>>>>>> @@ -158,10 +163,103 @@ static void testfilter_destroy(struct strmbase_filter *iface)
>>>>>>         strmbase_filter_cleanup(&filter->filter);
>>>>>>     }
>>>>>>     
>>>>>> +static DWORD WINAPI testfilter_frame_thread(void *arg)
>>>>>> +{
>>>>>> +    REFERENCE_TIME start_time, end_time;
>>>>>> +    struct testfilter *filter = arg;
>>>>>> +    IMemAllocator *allocator;
>>>>>> +    IMediaSample *sample;
>>>>>> +    unsigned i;
>>>>>> +    HRESULT hr;
>>>>>> +    DWORD fill;
>>>>>> +    BYTE *data;
>>>>>> +
>>>>>> +    hr = IMemInputPin_GetAllocator(filter->source.pMemInputPin, &allocator);
>>>>>> +    ok(hr == S_OK, "Got hr %#x.\n", hr);
>>>>>> +
>>>>>> +    start_time = (filter->cur_pos == 0xdeadbeef) ? 0 : filter->cur_pos;
>>>>>> +    while (hr == S_OK)
>>>>>> +    {
>>>>>> +        hr = IMemAllocator_GetBuffer(allocator, &sample, NULL, NULL, 0);
>>>>>> +        if (hr == VFW_E_NOT_COMMITTED)
>>>>>> +        {
>>>>>> +            IMemAllocator_Commit(allocator);
>>>>>> +            hr = IMemAllocator_GetBuffer(allocator, &sample, NULL, NULL, 0);
>>>>>> +        }
>>>>>> +        ok(hr == S_OK, "Got hr %#x.\n", hr);
>>>>>> +
>>>>>> +        hr = IMediaSample_GetPointer(sample, &data);
>>>>>> +        ok(hr == S_OK, "Got hr %#x.\n", hr);
>>>>>> +
>>>>>> +        fill = (start_time / 10000 & 0xffffff) ^ 0xccaabb;
>>>>>> +        for (i = 0; i < 640 * 480 * 3; i += 3)
>>>>>> +        {
>>>>>> +            data[i] = fill ^ i;
>>>>>> +            data[i + 1] = fill >> 8 ^ i;
>>>>>> +            data[i + 2] = fill >> 16 ^ i;
>>>>>> +        }
>>>>>> +
>>>>>> +        hr = IMediaSample_SetActualDataLength(sample, 640 * 480 * 3);
>>>>>> +        ok(hr == S_OK, "Got hr %#x.\n", hr);
>>>>>> +
>>>>>> +        end_time = start_time + 400000;
>>>>>> +        hr = IMediaSample_SetTime(sample, &start_time, &end_time);
>>>>>> +        ok(hr == S_OK, "Got hr %#x.\n", hr);
>>>>>> +        start_time = end_time;
>>>>>> +
>>>>>> +        if (winetest_debug > 1) trace("%04x: Sending frame.\n", GetCurrentThreadId());
>>>>>> +        hr = IMemInputPin_Receive(filter->source.pMemInputPin, sample);
>>>>>> +        if (winetest_debug > 1) trace("%04x: Returned %#x.\n", GetCurrentThreadId(), hr);
>>>>>> +
>>>>>> +        IMediaSample_Release(sample);
>>>>>> +    }
>>>>>> +
>>>>>> +    IMemAllocator_Release(allocator);
>>>>>> +    return hr;
>>>>>> +}
>>>>>
>>>>> Why spawn a thread that loops like this, instead of sending individual
>>>>> frames? In particular, the latter obviates the need to manually flush
>>>>> the stream when seeking.
>>>>>
>>>>> I can see value in trying to emulate a realistic filter, but it seems
>>>>> easier to me to just render a test AVI file and let the media detector
>>>>> insert a built-in one. A separate test that exercises the functions as a
>>>>> program would actually use them seems quite welcome, in fact.
>>>>>
>>>>
>>>> Actually I uncovered several self-made bugs while implementing it (as I
>>>> was not very familiar with quartz) which were apparent mostly because of
>>>> implementing this like a realistic filter. So I think it's useful.
>>>>
>>>> For example this implicitly tests the condition that the media detector
>>>> stops the stream before seeking, else the sample grabber would still
>>>> hold the previous sample and have a race condition.
>>>
>>> That seems suspicious; I'd expect that a flush should be sufficient to
>>> achieve that.
>>>
>>
>> It's possible there's a bug in the Sample Grabber, because it doesn't
>> handle flushing at all, and I don't know if it should. But as I
>> mentioned in the other reply to the EnterBitmapGrabMode implementation,
>> this uncovers the bug and Windows does seem to put a Stop() when seeking
>> (if I trace +strmbase on the test).
> 
> If Windows does fully stop the filter, then that's fine, but that should
> also be tested if possible, probably by checking "filter->state" when
> seeking.
> 

OK, on further investigation, Windows seems to pause the filter *before* the 
seek operation, for some reason. But that's not so important now, because I 
think I found a bug in the sample grabber anyway...
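
As a sketch of how the state at seek time could be checked (not the actual 
rewritten test; state_on_seek is a hypothetical field added to struct 
testfilter, impl_from_IMediaSeeking is assumed to mirror the other 
impl_from_* helpers, and filter->filter.state is the strmbase state field), 
something like this in the testfilter's SetPositions should capture it:

    static HRESULT WINAPI testseek_SetPositions(IMediaSeeking *iface,
            LONGLONG *current, DWORD current_flags, LONGLONG *stop, DWORD stop_flags)
    {
        struct testfilter *filter = impl_from_IMediaSeeking(iface);

        /* Record the filter state at the moment of the seek, so the test
         * body can later assert whether the media detector paused (or
         * stopped) the filter before seeking. */
        filter->state_on_seek = filter->filter.state;

        if (current)
            filter->cur_pos = *current;
        return S_OK;
    }

and then in the test body, after triggering the seek (assuming the testfilter 
local is called "filter"):

    ok(filter.state_on_seek == State_Paused, "Got state %d at seek time.\n",
            filter.state_on_seek);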

>>
>>>>
>>>> Rendering to a test AVI file doesn't seem that ideal, unless you mean
>>>> rendering it within the test itself? But I thought, from last patches I
>>>> sent about qedit, that the focus was to use custom filters as much as
>>>> possible to be able to test them better, and AVI files were just needed
>>>> for put_Filename only.
>>>
>>> quartz is complicated, not just because it uses a complicated system of
>>> callbacks, but also because it has a *lot* of moving parts, most of
>>> which can be system components but also can be application components.
>>> Hence one of the things I try to do, when writing quartz tests, is to
>>> try to test every single moving piece, one at a time. I try to validate
>>> every *non-obvious* implementation detail that I reasonably can, mostly
>>> with the exception of exhaustively testing error handling (e.g. in
>>> general I don't think it's worth testing whether the code checks for
>>> S_OK or SUCCEEDED(), especially if a potential failure can be flagged by
>>> an ERR/WARN message anyway) where functions are "not supposed to" fail.
>>>
>>> When doing this kind of testing, I want to isolate the causes of a
>>> return value—sometimes if only for clarity—and often this means using
>>> our own custom filters instead of system ones. For example, it makes it
>>> clearer that the video renderer is the one responsible for holding up
>>> the graph during preroll if we don't have any *other* system filters in
>>> the graph at the time.
>>>
>>> These tests are focused mostly on proving the implementation correct,
>>> though of course they help to catch regressions as well. At the same
>>> time, using custom filters also often means testing real-world
>>> behaviour. It's not rare for applications to insert their own filters at
>>> any point in the pipeline (source, sink, transform, parser).
>>>
>>> On the other hand, there's also some value in testing very high-level
>>> usage of the quartz API. That's not just because we need to test
>>> high-level functions (like IGraphBuilder::RenderFile()) but also
>>> because, as you've discovered, it's easy to miss important details about
>>> how parts work together. These tests are more regression tests than
>>> conformance tests. Most of these are in rungraph() [in
>>> quartz:filtergraph] and its callers.
>>>
>>> Back to the more concrete case: Certainly I'm not trying to advocate
>>> that we remove any of the testfilter infrastructure this patch adds. I
>>> am suggesting that we instead send frames one at a time instead of using
>>> such a loop. Although frankly, after looking at the later patches in the
>>> series I'm not even necessarily sure that's better. [I will note,
>>> though, that it would be nice to test the frames returned from
>>> GetBitmapBits() a little closer to where the test loop is introduced, if
>>> not in the same patch, so that I have that context immediately.] It may
>>> also make sense to send just one frame instead of looping, just for a
>>> little extra simplicity.
>>>
>>
>> Thanks for the information :-)
>>
>> I think the loop is needed to catch regressions as noticed. I should
>> probably keep the loop in the first patch, but only send black pixels
>> all the time, until the GetBitmapData tests, where I will convert it to
>> what we have now with the varying pattern, so it matches the verification.
>>
>> Would that be acceptable as a compromise?
> 
> My point is largely that the kind of regressions you're talking about
> would probably have been detectable with the second, separate test I was
> recommending.
> 
> Note that one problem with keeping the tests as they are now is that
> there's not a good way to check what happens if you call GetBitmapBits()
> before a sample has reached the sample grabber. This is more than a
> little important, as it's quite possible that either the media detector
> or the sample grabber is supposed to wait.
> 

Well, I rewrote the tests and I believe I found bugs in the sample 
grabber. It should definitely wait for a new sample when flushing, I 
think, but I'm still investigating.
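
A sketch of how the GetBitmapBits-before-a-sample case could be exercised 
(hypothetical helper names, not the actual rewritten test; whether the first 
wait times out is exactly the open question):

    static DWORD WINAPI get_bitmap_bits_thread(void *arg)
    {
        IMediaDet *detector = arg;
        char *buffer;
        HRESULT hr;
        LONG size;

        /* First call retrieves the required buffer size. */
        hr = IMediaDet_GetBitmapBits(detector, 0.0, &size, NULL, 640, 480);
        ok(hr == S_OK, "Got hr %#x.\n", hr);

        buffer = malloc(size);
        hr = IMediaDet_GetBitmapBits(detector, 0.0, NULL, buffer, 640, 480);
        ok(hr == S_OK, "Got hr %#x.\n", hr);
        free(buffer);
        return 0;
    }

and in the test body, before the source has delivered anything (assuming a 
local HANDLE called "thread"):

    thread = CreateThread(NULL, 0, get_bitmap_bits_thread, detector, 0, NULL);
    /* Whether this times out (i.e. whether GetBitmapBits waits for the
     * first sample) is what we want to find out. */
    if (WaitForSingleObject(thread, 100) == WAIT_TIMEOUT)
        trace("GetBitmapBits blocks until a sample arrives.\n");
    /* ...now let the testfilter send a sample... */
    ok(!WaitForSingleObject(thread, 1000), "Wait timed out.\n");
    CloseHandle(thread);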

Note that EnterBitmapGrabMode / GetBitmapBits will stall until a sample 
is first received. Originally, this was due to Pause only actually 
completing after a sample has been received (which is correct behaviour), 
as the +strmbase log on Windows was filled with GetState polling for it. 
But this doesn't work when flushing.
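
For reference, the polling in that log looks consistent with something like 
this on the media detector side (purely illustrative, reconstructed from the 
+strmbase trace rather than from anything documented; "filter" here is 
whichever IBaseFilter it is polling):

    FILTER_STATE state;
    HRESULT hr;

    /* The downstream filter only completes the pause transition once it
     * has received a sample, so GetState() keeps returning
     * VFW_S_STATE_INTERMEDIATE until then. */
    do
    {
        hr = IBaseFilter_GetState(filter, 10, &state);
    } while (hr == VFW_S_STATE_INTERMEDIATE);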


