Collisions affecting complexity is an interesting point.
But how big is the effect in reality?
If your hashes are 32 bits and well distributed, wouldn't you need somewhere on the order of 2^32 elements before collisions noticeably affect time complexity? Does it really happen already at 2% of 2^32, as used here?
Meanwhile, many array implementations are indexed with 32-bit integers, so they have a hard limit of 2^32 items. That's easy to work around, but it suggests the limit is rarely reached in practice.
So while theoretically interesting, wouldn't it be safe in practice to assume O(1) complexity in 99.99% of cases?
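For what it's worth, the birthday paradox means colliding pairs show up far earlier than 2^32 items, though each chain stays tiny. A quick back-of-the-envelope sketch (the 2% figure is taken from the discussion above; the formula is the standard expected-colliding-pairs estimate):

```python
# Birthday-paradox estimate: hashing n keys uniformly into m buckets
# yields roughly n*(n-1)/(2*m) expected colliding pairs.
m = 2 ** 32                 # 32-bit hash space
n = int(0.02 * m)           # ~86 million keys, i.e. "2% of 2^32"

expected_pairs = n * (n - 1) / (2 * m)
print(f"~{expected_pairs:,.0f} expected colliding pairs")
print(f"that's about {2 * expected_pairs / n:.1%} of keys sharing a bucket")
```

So roughly 2% of keys land in a shared bucket, but the expected chain length per bucket stays near 1, which is why amortized O(1) still holds in practice.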
Enrico Testori
Multimedia software developer. I turn food and coffee into bugs!
As far as I remember from university, it also depends on what you mean by 'finding' in your scenario.
If you need to retrieve a value by key lookup, hash maps are perfect; if you want to find values within a range, other data structures like B+ trees are better suited.
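To illustrate the point/range distinction, here's a toy Python sketch: a dict for O(1) point lookups, and a sorted key list with binary search standing in for what a B+ tree would do for range queries (the data and key range are made up for the example):

```python
import bisect

data = {k: k * k for k in range(0, 100, 7)}  # toy key-value store

# Point lookup: a hash map answers this in O(1) on average.
print(data[49])

# Range query over keys in [10, 40]: a hash map has no ordering, so it
# would have to scan everything. An ordered structure (a B+ tree, or
# here a sorted key list with binary search as a stand-in) finds the
# range in O(log n + k).
keys = sorted(data)
lo = bisect.bisect_left(keys, 10)
hi = bisect.bisect_right(keys, 40)
print([data[k] for k in keys[lo:hi]])
```

The same trade-off is why databases index with B+ trees rather than hash tables when range scans matter.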
Interesting comparison anyway! Seeing big-O notation made me feel like I was back at school ;-)