[PATCH v2 4/5] winegstreamer: Implement IMFMediaSource::Start.

Derek Lesho dlesho at codeweavers.com
Wed Oct 7 09:58:43 CDT 2020


On 10/6/20 9:58 PM, Zebediah Figura wrote:
> On 10/6/20 10:59 AM, Derek Lesho wrote:
>> +        if (position->vt != VT_EMPTY)
>> +        {
>> +            GstEvent *seek_event = gst_event_new_seek(1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
>> +                    GST_SEEK_TYPE_SET, position->u.hVal.QuadPart / 100, GST_SEEK_TYPE_NONE, 0);
>> +            GstSample *preroll;
>> +
>> +            gst_pad_push_event(stream->my_sink, seek_event);
>> +
>> +            /* this works because the pre-seek preroll is already removed by media_source_constructor */
>> +            g_signal_emit_by_name(stream->appsink, "pull-preroll", &preroll);
>> +            if (preroll)
>> +                gst_sample_unref(preroll);
> What is the point of doing this?
To wait for the flush to complete.  What is the point of waiting for the 
flush to complete?  To ensure we don't send any MEMediaSample events 
from before the seek after MESourceSeeked.
>> +
>> +            IMFMediaEventQueue_QueueEventParamVar(stream->event_queue,
>> +                seek_message ? MEStreamSeeked : MEStreamStarted, &GUID_NULL, S_OK, position);
>> +        }
>> +    }
>> +
>> +    IMFMediaEventQueue_QueueEventParamVar(source->event_queue,
>> +        seek_message ? MESourceSeeked : MESourceStarted,
>> +        &GUID_NULL, S_OK, position);
>> +
>> +    source->state = SOURCE_RUNNING;
>> +
>> +    gst_element_set_state(source->container, GST_STATE_PLAYING);
>> +    gst_element_get_state(source->container, NULL, NULL, -1);
> And if this method is asynchronous, do you need this
> gst_element_get_state() call?
Probably not, though not because the method is asynchronous, but rather 
because it shouldn't affect how we interact with the appsink(s).
>
>> @@ -432,6 +717,8 @@ static HRESULT media_stream_connect_to_sink(struct media_stream *stream)
>>       if (gst_pad_link(stream->their_src, stream->my_sink) != GST_PAD_LINK_OK)
>>           return E_FAIL;
>>   
>> +    g_object_set(stream->appsink, "caps", source_caps, NULL);
>> +
> Again, what's the point of this?
To see whether the caps have changed since the prior selection in 
start_pipeline.  If they have changed, we must send a reconfigure event.
>
>> +    if (SUCCEEDED(hr = source_create_async_op(SOURCE_ASYNC_START, &command)))
>> +    {
>> +        command->u.start.descriptor = descriptor;
>> +        command->u.start.format = *time_format;
>> +        PropVariantCopy(&command->u.start.position, position);
>> +
>> +        hr = MFPutWorkItem(source->async_commands_queue, &source->async_commands_callback, &command->IUnknown_iface);
>> +    }
> There are likely mfplat pecularities I'm not aware of here, but does
> this need to be asynchronous? I know that the documentation says "this
> method is asynchronous", but it's not immediately obvious to me that
> there isn't already a level of asynchronicity between this call and any
> of its effects.

Well, I think that making the media source calls synchronous within 
their own thread simplifies things, since you don't have to worry about 
blocking or about synchronization with other calls.  For example, 
::RequestSample, if synchronous, would need two paths: one for popping 
samples from the appsink and one for issuing the request into some 
dedicated request queue.  That was actually the implementation I had at 
first, but then I implemented the async reader callback with the same 
system, and Nikolay instead insisted that I use a command queue, which 
reduces complexity a lot.  Then, when adding seeking support, I switched 
to this approach for the media source as well.  If nothing else, this 
system better matches other media foundation components 
(IMFMediaSession, IMFSourceReader), which reduces overhead for the 
source reader.
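As an aside, the serialization argument can be sketched without any 
mfplat machinery: every operation becomes a command object pushed onto a 
FIFO that a single worker drains, so no handler ever races another.  A 
minimal, library-free illustration (hypothetical names, not the actual 
winegstreamer types; the real code dispatches through MFPutWorkItem and 
an IMFAsyncCallback):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the SOURCE_ASYNC_* operation kinds. */
enum command_type { CMD_START, CMD_STOP, CMD_REQUEST_SAMPLE };

struct command
{
    enum command_type type;
    struct command *next;
};

struct command_queue
{
    struct command *head, *tail;
};

/* Callers only ever enqueue; a single worker thread pops and executes
 * commands in order, so handlers are serialized with respect to each
 * other and need no further locking among themselves. */
static void queue_push(struct command_queue *q, struct command *cmd)
{
    cmd->next = NULL;
    if (q->tail) q->tail->next = cmd;
    else q->head = cmd;
    q->tail = cmd;
}

static struct command *queue_pop(struct command_queue *q)
{
    struct command *cmd = q->head;
    if (cmd)
    {
        q->head = cmd->next;
        if (!q->head) q->tail = NULL;
    }
    return cmd;
}
```

The FIFO order is the whole point: a Start queued before a 
RequestSample is guaranteed to execute before it.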

>
>> +
>> +    return hr;
>>   }
>>   
>>   static HRESULT WINAPI media_source_Stop(IMFMediaSource *iface)
>> @@ -772,6 +1073,9 @@ static HRESULT WINAPI media_source_Shutdown(IMFMediaSource *iface)
>>       if (source->no_more_pads_event)
>>           CloseHandle(source->no_more_pads_event);
>>   
>> +    if (source->async_commands_queue)
>> +        MFUnlockWorkQueue(source->async_commands_queue);
>> +
>>       return S_OK;
>>   }
>>   
>> @@ -852,6 +1156,7 @@ static HRESULT media_source_constructor(IMFByteStream *bytestream, struct media_
>>           return E_OUTOFMEMORY;
>>   
>>       object->IMFMediaSource_iface.lpVtbl = &IMFMediaSource_vtbl;
>> +    object->async_commands_callback.lpVtbl = &source_async_commands_callback_vtbl;
>>       object->ref = 1;
>>       object->byte_stream = bytestream;
>>       IMFByteStream_AddRef(bytestream);
>> @@ -860,6 +1165,9 @@ static HRESULT media_source_constructor(IMFByteStream *bytestream, struct media_
>>       if (FAILED(hr = MFCreateEventQueue(&object->event_queue)))
>>           goto fail;
>>   
>> +    if (FAILED(hr = MFAllocateWorkQueue(&object->async_commands_queue)))
>> +        goto fail;
>> +
>>       object->container = gst_bin_new(NULL);
>>       object->bus = gst_bus_new();
>>       gst_bus_set_sync_handler(object->bus, mf_src_bus_watch_wrapper, object, NULL);
>> diff --git a/dlls/winegstreamer/mfplat.c b/dlls/winegstreamer/mfplat.c
>> index 2e8b0978648..9aa17ad00ab 100644
>> --- a/dlls/winegstreamer/mfplat.c
>> +++ b/dlls/winegstreamer/mfplat.c
>> @@ -601,3 +601,152 @@ IMFMediaType *mf_media_type_from_caps(const GstCaps *caps)
>>   
>>       return media_type;
>>   }
>> +
>> +GstCaps *caps_from_mf_media_type(IMFMediaType *type)
>> +{
>> +    GUID major_type;
>> +    GUID subtype;
>> +    GstCaps *output = NULL;
>> +
>> +    if (FAILED(IMFMediaType_GetMajorType(type, &major_type)))
>> +        return NULL;
>> +    if (FAILED(IMFMediaType_GetGUID(type, &MF_MT_SUBTYPE, &subtype)))
>> +        return NULL;
>> +
>> +    if (IsEqualGUID(&major_type, &MFMediaType_Video))
>> +    {
>> +        UINT64 frame_rate = 0, frame_size = 0;
>> +        DWORD width, height, framerate_num, framerate_den;
>> +        UINT32 unused;
>> +
>> +        if (FAILED(IMFMediaType_GetUINT64(type, &MF_MT_FRAME_SIZE, &frame_size)))
>> +            return NULL;
>> +        width = frame_size >> 32;
>> +        height = frame_size;
>> +        if (FAILED(IMFMediaType_GetUINT64(type, &MF_MT_FRAME_RATE, &frame_rate)))
>> +        {
>> +            frame_rate = TRUE;
>> +            framerate_num = 0;
>> +            framerate_den = 1;
>> +        }
>> +        else
>> +        {
>> +            framerate_num = frame_rate >> 32;
>> +            framerate_den = frame_rate;
>> +        }
>> +
>> +        /* Check if type is uncompressed */
>> +        if (SUCCEEDED(MFCalculateImageSize(&subtype, 100, 100, &unused)))
> Early return could save a level of indentation.
Yes, but in future patches there will be else branches for handling 
compressed types.  Even if we don't end up using this for the media 
source, it will be necessary for games which manually use decoder MFTs.
>
> I also feel like there's an easier way to do this check. Maybe something
> like:
>
> for (i = 0; i < ARRAY_SIZE(uncompressed_video_formats); i++)
> {
>      ...
> }
>
> if (format == GST_VIDEO_FORMAT_UNKNOWN
>          && !memcmp(&subtype, &MFVideoFormat_Base.Data2, ...))
>      format = gst_video_format_from_fourcc(subtype.Data1);
>
> if (format == GST_VIDEO_FORMAT_UNKNOWN)
> {
>      FIXME("Unrecognized subtype %s.\n", debugstr_guid(&subtype));
>      return NULL;
> }
That's certainly a possibility, but using MFCalculateImageSize is 
probably the best way to concisely determine whether we want to take the 
uncompressed video format route.  With your solution, I guess we'd just 
add the handling for those formats in `if (format == 
GST_VIDEO_FORMAT_UNKNOWN)`.  Either way is fine; which path should I take?
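For what it's worth, the memcmp in your sketch works because every 
FOURCC-based video subtype is just MFVideoFormat_Base with Data1 
replaced by the FOURCC.  Roughly (with the GUID layout and base GUID 
spelled out locally, since this snippet is deliberately standalone):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Local stand-in for the Windows GUID layout. */
typedef struct
{
    uint32_t Data1;
    uint16_t Data2, Data3;
    uint8_t Data4[8];
} GUID;

/* MFVideoFormat_Base, {00000000-0000-0010-8000-00AA00389B71}: all
 * FOURCC-based video subtypes share everything after Data1 with it. */
static const GUID base_video_format =
    {0, 0x0000, 0x0010, {0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}};

/* Returns the FOURCC if the subtype follows the base pattern, else 0. */
static uint32_t fourcc_from_subtype(const GUID *subtype)
{
    if (memcmp(&subtype->Data2, &base_video_format.Data2,
            sizeof(GUID) - sizeof(uint32_t)))
        return 0;
    return subtype->Data1;
}
```

So in your version, a subtype that passes this check but isn't in 
uncompressed_video_formats would fall through to 
gst_video_format_from_fourcc().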
>> +            FIXME("Unrecognized subtype %s\n", debugstr_guid(&subtype));
>> +            return NULL;
>> +        }
>> +
>> +
>> +        if (frame_size)
>> +        {
>> +            gst_caps_set_simple(output, "width", G_TYPE_INT, width, NULL);
>> +            gst_caps_set_simple(output, "height", G_TYPE_INT, height, NULL);
>> +        }
>> +        if (frame_rate)
> Why do you need to check this is nonzero? Can the frame rate really be 0/0?
To be honest, I don't remember exactly which circumstance led to this 
being necessary.  I assume it was something with the IMFMediaType that 
MK11 generates, but it does seem very strange, so I'll have to check 
again.
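For context, MF_MT_FRAME_RATE packs the numerator into the high 32 bits 
and the denominator into the low 32 bits of a UINT64 (this is what 
MFSetAttributeRatio stores), so a packed value of 0 would indeed mean a 
0/0 ratio.  The unpacking the patch does is equivalent to:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors how the patch splits the MF_MT_FRAME_RATE attribute:
 * numerator in the high DWORD, denominator in the low DWORD. */
static void unpack_ratio(uint64_t packed, uint32_t *num, uint32_t *den)
{
    *num = packed >> 32;
    *den = (uint32_t)packed;
}
```

So the `if (frame_rate)` check only trips when the attribute is present 
but stored as the packed value 0, i.e. literally 0/0.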
>
> For that matter, do we need to give GStreamer the framerate at all? It
> should only ever matter to sink elements (or those that respect a clock).
Why risk seeing what happens when we don't translate this cap?  Are 
there any benefits?  Maybe a decoder would need it to handle some 
time-based seek.
