wined3d: Add Nvidia 8800GTX detection

Roderick Colenbrander thunderbird2k at gmail.com
Tue Jul 20 22:08:19 CDT 2010


On Wed, Jul 21, 2010 at 3:59 AM, Roderick Colenbrander
<thunderbird2k at gmail.com> wrote:
> On Tue, Jul 20, 2010 at 11:54 PM, Luke Bratch <l_bratch at yahoo.co.uk> wrote:
>> --- On Tue, 20/7/10, Henri Verbeet <hverbeet at gmail.com> wrote:
>>
>>> Negative on using experimental extensions I'm afraid. Writing a
>>> proof of concept patch is ok, of course, and I'd be willing to
>>> review that, but it can't go in until the extension is finalized.
>>> Note that this wouldn't allow you to get rid of the current code
>>> though, you still need a fallback. It would just allow for more
>>> accurate reporting on cards that do support that extension.
>>
>> The attached uses that experimental extension [1] to report the correct total memory for cards that support it, while sticking with the existing hard coded values and code layout for those that don't.
>>
>> I would appreciate any comments.
>>
>> Thanks
>>
>> Luke Bratch
>>
>> [1] http://developer.download.nvidia.com/opengl/specs/GL_NVX_gpu_memory_info.txt
>
> Actually I wrote a GL_NVX_gpu_memory_info patch as well, but didn't
> have time to send the mail here yet. The patch I wrote is a bit
> different, and I think it's how this should work: you have to keep
> the current code as it is. After the GPU / amount of video memory
> has been selected, AND if no GPU memory override is set in the
> registry (there can be good reasons for not exposing all video
> memory), then if the nvidia extension is around, the amount of video
> memory should be overridden. Let me dig up the stuff I made.
>
> Roderick
>
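
For reference, the query itself only takes a few lines. A sketch, not
the actual patch; the token value is taken from the spec Luke linked,
and the extension reports sizes in kilobytes:

#include <GL/gl.h>

/* Sketch only: query the total video memory via
 * GL_NVX_gpu_memory_info. Callers must first check that the
 * extension appears in the GL extension string, and a GL context
 * must be current. */
#ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
#define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX 0x9048
#endif

static unsigned int query_nvx_total_memory_mb(void)
{
    GLint total_kb = 0;

    glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &total_kb);
    return total_kb / 1024; /* KB -> MB */
}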

I just had a look at the old stuff I made, but it was quite hacky and
not what I remembered (I thought it was a bit nicer).

After thinking about it a bit, I don't think this is an easy 'few
lines' patch; it requires more changes. First, the issues and what I
don't like about adding it to the current mechanism. You could argue
that we should query the amount of video memory from within each
'select_card_*' function. The problem right now is that each function
returns as soon as the right card has been found, but we would like
to do the override after the card has been found and before we return
from the call. The code could be changed to do this, but I find it
nicer to make some bigger changes to the code.
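
Roughly the flow I have in mind after such a restructuring; all of
the helper names here are invented for illustration:

/* Illustrative only, invented names: pick the card first, take the
 * default amount of video memory from the table, then let the
 * extension override it, unless the user set an override in the
 * registry, which wins over everything. */
device = select_card(gl_vendor, card_vendor, gl_renderer);
vidmem = gpu_description_table_get_vidmem(device);
if (!registry_vidmem_override && gl_info->supported[NVX_GPU_MEMORY_INFO])
    vidmem = query_nvx_total_memory_mb();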

I think 'select_card_*' should just return the PCI device id, and the
default amount of video memory should be stored in, let's say, the
driver version table (a different name for the table might make
sense). It would also keep us from duplicating some of the logic for
the open source nvidia/ati drivers (if they ever become useful for 3D
gaming apart from Quake3). I think I was against such a change a few
months ago when some of the code was restructured. If I was, it was
likely because a table makes it harder to add comments, which can be
useful for the memory info.
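
Roughly what I mean, with invented names (the 8800 GTX entry is just
an example, matching the card in the subject):

/* Sketch with invented names: one table maps a PCI device id to a
 * description and a default amount of video memory, so that
 * select_card_* only has to return the device id. */
struct gpu_description
{
    enum wined3d_pci_device device;
    const char *description;
    unsigned int vidmem; /* default video memory, in MB */
};

static const struct gpu_description gpu_description_table[] =
{
    {CARD_NVIDIA_GEFORCE_8800GTX, "GeForce 8800 GTX", 768},
    /* ... */
};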

Once all the video memory is in the table, we could grab the default
amount of video memory from this table in 'wined3d_guess_card'. In
order to override the amount of video memory, we could add an
(optional) 'select_nvidia_gpu_memory' entry to
'vendor_card_select_table'. It would be easy to add other
GL_*_memory_info extensions as selectors.
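
Something like this for the selector table; again only a sketch, and
the memory callback is made up:

/* Sketch: an optional per-vendor memory callback next to the
 * existing card selector. A NULL callback means "keep the default
 * from the table"; other GL_*_memory_info extensions would simply be
 * additional callbacks. */
struct vendor_card_selection
{
    enum wined3d_gl_vendor gl_vendor;
    enum wined3d_pci_vendor card_vendor;
    enum wined3d_pci_device (*select_card)(const char *gl_renderer);
    unsigned int (*select_gpu_memory)(const struct wined3d_gl_info *gl_info);
};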

Before the memory info can be moved to the table, the Intel driver
information has to be added there (there is none right now, but we do
have memory info in the selector functions). After that, all the
memory info can be moved over.

I'm a little tired now, so I might have missed something. I will
likely play with this since I have a need for it as well. It is not
that hard, but Henri, I promise that the next WineD3D thing I hack on
is the blit_shader... Hopefully I have time for that by the end of
this week.

Roderick


