dbghelp performance

Markus Amsler markus.amsler at oribi.org
Sun May 6 11:12:20 CDT 2007


Eric Pouech wrote:
> Markus Amsler wrote:
>> No, performance is exactly the same as pool_heap :( .
> even for memory consumption ???
Yes, it looks like HeapCreate has a default size of 64k.
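For illustration (a sketch only, the wrapper name is made up): with 0 for
both the initial and the maximum size the system picks the reserve itself,
so every private heap costs that baseline even if it's barely used.

    #include <windows.h>

    /* Sketch: one private, growable heap.  Passing 0 for dwInitialSize
       and dwMaximumSize leaves the initial reserve up to the system, so
       even a nearly empty pool pays for a full heap. */
    static HANDLE pool_create_heap(void)
    {
        return HeapCreate(0 /* flOptions */, 0 /* dwInitialSize */,
                          0 /* dwMaximumSize: 0 = growable */);
    }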
>> I analyzed why your original insert_first version was slower and
>> more memory hungry than pool_heap. It turned out pool_realloc is the
>> problem, not pool_alloc. First, there's a memory leak: if the memory
>> is moved, the old block is not freed. Second, pool_realloc is O(n),
>> which is the reason for the speed hits. Directly using heap functions
>> for reallocs solves both problems (but looks too hackish to get
>> committed, perhaps you have a better idea).
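To illustrate what I meant there by directly using heap functions (just a
sketch; the pool layout and the names below are made up): HeapReAlloc frees
or reuses the old block itself, so nothing leaks when the data has to move,
and there is no grow-by-copy loop on our side.

    #include <windows.h>

    /* Hypothetical pool layout, only the heap handle matters here. */
    struct my_pool
    {
        HANDLE heap;   /* private heap backing this pool (assumed field) */
    };

    /* Sketch: delegate reallocation to the Win32 heap instead of doing
       pool_alloc + memcpy.  HeapReAlloc releases (or reuses) the old
       block, so nothing is leaked when the data moves. */
    static void *pool_heap_realloc(struct my_pool *pool, void *ptr, size_t size)
    {
        if (!ptr) return HeapAlloc(pool->heap, 0, size);
        return HeapReAlloc(pool->heap, 0, ptr, size);
    }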
> we could try not to realloc the array of arrays but rather use a tree
> of arrays, which should solve most of the issues, but that would make
> the code more complicated
> another way is to double the size of the bucket each time we need to
> increase the size (instead of adding one bucket)
I'll have a look at doubling the bucket size.
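One way to read the doubling idea (a sketch only; none of the names below
are the real dbghelp ones): if bucket k holds twice as many elements as
bucket k-1, a vector of N elements needs only O(log N) buckets, so the
array of bucket pointers is reallocated O(log N) times instead of once per
bucket.

    #include <stdlib.h>

    /* Sketch only - not the real dbghelp vector. */
    struct gvector
    {
        void   **buckets;     /* array of bucket pointers, realloc'ed on growth */
        unsigned num_buckets;
        unsigned num_elts;
        unsigned elt_size;
        unsigned first_elts;  /* number of elements in bucket 0 */
    };

    /* map a global index to (bucket, offset inside that bucket) */
    static void gvector_locate(const struct gvector *v, unsigned idx,
                               unsigned *bucket, unsigned *offset)
    {
        unsigned b = 0, t = idx / v->first_elts + 1;
        while (t >>= 1) b++;   /* b = floor(log2(idx / first_elts + 1)) */
        *bucket = b;
        *offset = idx - v->first_elts * ((1u << b) - 1);
    }

    static void *gvector_add(struct gvector *v)
    {
        unsigned capacity = v->first_elts * ((1u << v->num_buckets) - 1);
        unsigned bucket, offset;

        if (v->num_elts == capacity)   /* all existing buckets are full */
        {
            unsigned new_elts = v->first_elts << v->num_buckets;
            void **nb = realloc(v->buckets, (v->num_buckets + 1) * sizeof(*nb));
            if (!nb) return NULL;
            v->buckets = nb;
            if (!(nb[v->num_buckets] = malloc((size_t)new_elts * v->elt_size)))
                return NULL;
            v->num_buckets++;
        }
        gvector_locate(v, v->num_elts++, &bucket, &offset);
        return (char *)v->buckets[bucket] + (size_t)offset * v->elt_size;
    }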
>> Here are the results for pool_realloc on top of insert_first:
>> pool_realloc          4.5s     54M
>> pool_realloc,r300     17s      104M
>>
>> The next problem is vector_iter_[up|down], because vector_position is
>> O(n). Explicitly storing the current iter position speeds r300 up to
>> 8s (from the original 115s)! But I'm not sure how to implement it
>> cleanly. Directly use for() instead of vector_iter_*(), use an
>> iterator, ...
> likely use an iterator which keeps track of the current position (as
> we do for the hash tables)
An iterator for a vector looks a bit like overkill; I was in favor of
for(i=0; i<vector_length(); i++). Either solution will add some code on
the caller side.
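For comparison, the two options side by side (sketch only; vector_at()
stands in for whatever positional accessor is available, and the iterator
struct and its names are made up). Both avoid vector_position(), whose
linear search is what makes vector_iter_up()/vector_iter_down() O(n) per
element.

    struct vector;                                   /* dbghelp's vector   */
    unsigned vector_length(const struct vector *v);  /* assumed existing   */
    void    *vector_at(const struct vector *v, unsigned pos);

    /* option 1: plain index loop at the call site */
    static void walk_with_for(const struct vector *v, void (*cb)(void *elt))
    {
        unsigned i;
        for (i = 0; i < vector_length(v); i++)
            cb(vector_at(v, i));
    }

    /* option 2: iterator that carries its position, like the hash table one */
    struct vector_iter
    {
        const struct vector *vector;
        unsigned             pos;   /* current index, no linear lookup needed */
    };

    static void *vector_iter_next(struct vector_iter *iter)
    {
        if (iter->pos >= vector_length(iter->vector)) return NULL;
        return vector_at(iter->vector, iter->pos++);
    }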

Markus


