Wine GPU decoding
stefandoesinger at gmail.com
Mon Mar 31 17:29:00 CDT 2014
If this mail only contains this line my android client screwed up and I'll
resend it tomorrow.
On 31.03.2014 21:16, "Michael Müller" <michael at fds-team.de> wrote:
> This is actually the Windows version of VLC playing an MPEG2 movie with
> GPU acceleration using DXVA2. My implementation of dxva2 uses VAAPI
> on Linux to do the actual GPU decoding and should support AMD, Intel and
> NVIDIA cards.
> The most difficult part is that DXVA2 is completely based on
> IDirect3DDevice9 and IDirect3DSurface9. DXVA2 places the output images
> into a surface, and the application locks the surface to get the output
> data or simply presents it to the screen.
I did some introductory interface reading. If I understand it correctly,
the dxva implementation / driver can control the pool of the input surface.
Not only that, it actually creates the surface. Is that correct?
Afaics the output surface is either a dxva-created surface or a render
target, is that correct?
> Currently I lock both kinds of buffers after rendering a frame and do the
> synchronization in system memory, which is rather inefficient depending
> on the surface type.
If you are in system memory, is there an issue with using the d3d surface's
memory as the vaapi input buffer? Also take note of user pointer surfaces /
textures in d3d9ex.
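Synchronizing through system memory like this usually comes down to a pitch-aware row copy: a locked D3D surface exposes a pointer and a pitch, and the pitch is generally larger than the packed row size, so rows have to be copied one by one. A minimal sketch of that copy (plain Python with made-up sizes, not the actual wined3d or dxva2 code):

```python
# Sketch: moving one plane of a decoded frame between two buffers
# whose pitches differ (e.g. a locked D3D9 surface and a VAAPI image
# buffer). All sizes below are hypothetical example values.

def copy_rows(dst, dst_pitch, src, src_pitch, row_bytes, rows):
    """Copy `rows` rows of `row_bytes` bytes between buffers with
    different pitches, skipping the padding at the end of each row."""
    for y in range(rows):
        s = y * src_pitch
        d = y * dst_pitch
        dst[d:d + row_bytes] = src[s:s + row_bytes]

# Example: a 4x2 one-byte-per-pixel plane; source pitch 8, dest pitch 4.
rows, row_bytes = 2, 4
src = bytearray(range(16))          # pitch 8: row 0 = 0..3, row 1 = 8..11
dst = bytearray(rows * row_bytes)   # tightly packed, pitch 4
copy_rows(dst, 4, src, 8, row_bytes, rows)
print(dst == bytearray([0, 1, 2, 3, 8, 9, 10, 11]))  # → True
```

Doing this for every frame is exactly the per-frame CPU cost the mail is trying to avoid.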
> My original idea was to do the copy in the graphic
> card as I can copy the image to a texture after decoding, but after
> Sebastian implemented this part we found out that the VAAPI implies a
> format conversion to RGB when copying data to a texture.
I do not know of any Windows driver that supports YUV render targets (see
above). Are dxva-created output surfaces video memory surfaces (or
textures) or system memory surfaces? If they are sysmem surfaces you don't
have a problem - the app either has to read back to sysmem or put up with
an RGB surface / texture.
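For reference, the RGB conversion that the VAAPI texture copy implies is essentially a per-pixel color-space transform. The sketch below uses full-range BT.601 coefficients; it illustrates the math only, not VAAPI's actual implementation (which runs on the GPU and handles chroma subsampling):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YCbCr sample to RGB, the kind of
    conversion implied when a decoded YUV surface is copied into an
    RGB texture."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral chroma → mid gray (128, 128, 128)
print(yuv_to_rgb(255, 128, 128))  # full luma, neutral chroma → white (255, 255, 255)
```

The point of the conversion being implicit is that an application expecting raw YUV data back from LockRect() cannot be given this RGB result.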
But even if you're copying to an RGB surface you have to get the GL texture
from the IDirect3DSurface9 somehow. There may not even be one, if the
surface is just the GL backbuffer. This is just a wine-internal problem
though and should be solvable one way or another.
The vaapi-glx interface is also missing options for the mipmap level and
cube map face. I guess you can ignore that until you find an application
that wants a video decoded to the negative z face, mipmap level 2, of a
rendertarget-capable d3d cube texture.
You may also want a way to make wined3d activate the device's WGL context.
Right now that's not much of an issue if your code is called from the
thread that created the device. The command stream will make this more
complicated, though.
> Anyway, if other applications continue to copy the data back to system
> memory it might be better to instead wrap the VAAPI buffers as Direct3D9
> surfaces so that we can directly map the VAAPI buffers when LockRect()
> is called instead of copying the data. Though this would imply problems
> when the application tries to pass this interface to Present().
IDirect3DDevice9::Present() does not accept surfaces, but the problem
remains for calls that do, e.g. StretchRect().
If the vaapi buffer has a constant address you can create a user memory d3d
surface. I wouldn't be surprised if dxva was a motivation for user memory
surfaces in the first place.
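The zero-copy idea here, wrapping a constant-address decoder buffer so that locking the surface hands the application the buffer directly instead of copying, can be sketched as follows (a toy model with hypothetical types, not the real d3d9ex or dxva2 code):

```python
class UserMemorySurface:
    """Toy model of a user-memory surface: it owns no storage and
    wraps a caller-provided buffer at a fixed address. lock() returns
    the wrapped buffer and its pitch without copying anything."""

    def __init__(self, buf, pitch, height):
        assert len(buf) >= pitch * height
        self.buf, self.pitch, self.height = buf, pitch, height

    def lock(self):
        # Zero-copy: the application reads and writes the decoder's
        # buffer directly through the returned reference.
        return self.buf, self.pitch

# The decoder owns this buffer and its address never changes.
vaapi_buffer = bytearray(16 * 4)
surf = UserMemorySurface(vaapi_buffer, pitch=16, height=4)

locked, pitch = surf.lock()
locked[0] = 0xFF                    # write through the "surface"...
print(vaapi_buffer[0] == 0xFF)      # ...lands in the decoder buffer → True
```

This only works if the wrapped address really is stable for the surface's lifetime, which is why the constant-address condition above matters.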
On a related note, we don't want any GLX code in wined3d, and probably not
in any dxva.dll. The vaapi-glx.h header seems simple enough to use through
WGL as it just says a context needs to be active. If not, you'll have to
export a WGL version of vaapi from winex11.drv.
At some point we should think about equivalent interfaces on OSX and how to
abstract between that and vaapi, but not today.