For the sake of simplicity, since this whole exercise is a byproduct, the mapping calculations are done in doubles. To get an even distribution of values and good randomisation, it is thus necessary to break down the size_t hash value in a first step (size_t can be 64 bit, and the random numbers would otherwise be subject to rounding errors).

The choice of this quantiser is tricky: it must be a power of two to guarantee an even distribution, yet if it is chosen too close to the grid of the result values, with low probability we would fail to cover some of the possible result values. If it is chosen too large, then of course we would run the danger of producing correlated numbers on consecutive picks. The approach taken here is to use 4 bits of headroom above the log-2 of the required value range; for example, 10-step values would use a quantiser of 128, which looks like a good compromise. The following tests will show how well this choice holds up.