[NCLUG] Python vs. Ruby, and PRNGs

Stephen Warren swarren at wwwdotorg.org
Wed Apr 9 19:02:16 MDT 2008


So, I wrote a quick script that basically does:

* Roll a pair of dice N times, and calculate Chi^2 against the expected
distribution.

* Repeat the above, keeping a record of each run's Chi^2 value, then
calculate some stats on the set of Chi^2 values. (Rough sketches of both
steps are below.)
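
The per-run part was something along these lines (a minimal Python
sketch rather than the exact script I ran; the 2..12 totals and their
theoretical probabilities are the only baked-in assumptions):

import random

# Theoretical probability of each dice-pair total s in 2..12 is
# (6 - |7 - s|) / 36.
EXPECTED_P = {s: (6 - abs(7 - s)) / 36.0 for s in range(2, 13)}

def chi_squared_for_run(n_rolls):
    """Roll a pair of dice n_rolls times and return the Chi^2 of the
    observed totals against the theoretical distribution."""
    observed = {s: 0 for s in range(2, 13)}
    for _ in range(n_rolls):
        observed[random.randint(1, 6) + random.randint(1, 6)] += 1
    return sum((observed[s] - n_rolls * p) ** 2 / (n_rolls * p)
               for s, p in EXPECTED_P.items())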

It turns out that across 1000 runs of 100000 dice-pair rolls each, I
got the following stats for the per-run Chi^2 values:

Min:       1.04
Mean:      9.88
Max:      30.06
Std.Dev.:  4.49
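
The repeat-and-summarize step would look something like this (again
just a sketch; it reuses chi_squared_for_run() from the sketch above
and Python's statistics module for the mean and standard deviation):

import statistics

def summarize_chi_squared(n_runs=1000, n_rolls=100000):
    # One Chi^2 value per run, then simple stats over the whole set.
    values = [chi_squared_for_run(n_rolls) for _ in range(n_runs)]
    return (min(values), statistics.mean(values), max(values),
            statistics.stdev(values))

print(summarize_chi_squared())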

From memory, assuming a normal distribution of values, roughly 68% of
results are expected to fall within 1 SD of the mean.

As such, any single result within (at least) 1 SD of the mean
(9.88 +/- 4.49, i.e. roughly the range 5.4...14.4) is simply expected.

(I guess I should also graph the set of Chi^2 values to check that they
really do look like a normal distribution.)
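
Something like the following would do for a quick look, assuming
matplotlib is installed (it reuses chi_squared_for_run() from the
sketch above):

import matplotlib.pyplot as plt

# Collect one Chi^2 value per run and eyeball the shape of the
# distribution with a quick histogram.
values = [chi_squared_for_run(100000) for _ in range(1000)]
plt.hist(values, bins=30)
plt.xlabel('Chi^2')
plt.ylabel('Number of runs')
plt.show()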

So, from memory, last night's Ruby Chi^2 was something like 10.x and
Python's was 14.x? Given the spread above, that's not enough to imply
any meaningful difference between the PRNGs used by those two
languages, especially from a sample size of only 1-3 runs.

As an aside, the large variation in Chi^2 is also probably expected,
since any (P)RNG will occasionally produce some pretty far-out results;
see random.org's explanation of why their random-number-quality graphs
sometimes indicate the quality is quite bad!


