Something flashed through my mind while discussing benchmarking issues at
a client. All benchmarks have to run on large volumes because the x86
CPU's timer cannot resolve time any finer than 1/16th of a second.
So here we are fighting for hundredths, sometimes thousandths of a second
on a hotlap, and all that for a 'fake' time? Is there something I am
missing, some kind of trick used by the developers, or is it all random
past the 1/16 limit?
Could anybody maybe enlighten me on this?
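
For what it's worth, the only trick I can think of (and I am not at all
sure this is what the developers actually do) is the Pentium time-stamp
counter, which counts individual CPU cycles and can be read with the
RDTSC instruction. A rough sketch, assuming GCC-style inline assembly on
a Pentium-class x86 CPU:

#include <stdio.h>
#include <stdint.h>

/* Illustration only: read the x86 time-stamp counter (RDTSC), which
 * increments once per CPU clock cycle -- far finer than a 1/16 s tick. */
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start = read_tsc();
    /* ... work being timed ... */
    uint64_t end = read_tsc();

    /* Raw cycle count; dividing by the CPU clock frequency in Hz
     * would give elapsed seconds. */
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}

Dividing the cycle delta by the clock frequency would give times far
below the 1/16th of a second tick, but whether benchmark or game timers
really work that way is exactly what I am asking.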
**********************************************************
* Jean-Francois De Rudder Tel : +27 21 4196005 *
* Sybase SA Fax : +27 21 4196009 *
**********************************************************