For these large computations, I compute 50 extra guard digits so that roundoff in any later calculations with the output cannot contaminate the digits I actually report.
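As a minimal sketch of the guard-digit idea using Python's standard `decimal` module (the function name and the choice of computing \(\sqrt{2}\) are just for illustration, not the author's actual setup):

```python
from decimal import Decimal, getcontext

def sqrt2_digits(n, guard=50):
    """Compute sqrt(2) to n significant digits, carrying `guard`
    extra digits internally so that roundoff in the computation
    cannot affect the digits we report."""
    getcontext().prec = n + guard      # working precision: n + 50 digits
    full = Decimal(2).sqrt()           # computed at the higher precision
    # Report only the first n significant digits ("1." plus n-1 more).
    return str(full)[:n + 1]
```

For example, `sqrt2_digits(10)` returns the string `1.414213562`; any roundoff lives far out in the 50 discarded guard digits.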
Click here to see the p-value tables.
Normal numbers are those whose expansion in every base has a uniform distribution of digits. Let \(b\) be a base, and let \(X\) be an indicator variable for counting a particular digit: \(X\) is \(1\) with probability \(1/b\) and \(0\) otherwise. This makes its mean \(\mu=1/b\) and standard deviation \[\sigma=\sqrt{\left({1\over b}\right)^2\cdot{b-1\over b} +\left({b-1\over b}\right)^2\cdot{1\over b}} =\sqrt{{b-1\over b^3}+{(b-1)^2\over b^3}} =\sqrt{b^2-b\over b^3}={\sqrt{b-1}\over b}\]

This distribution describes a single digit; summing \(N\) independent copies of \(X\) describes how many occurrences of the digit we would expect to see when computing \(N\) digits. The \(p\)-value can be computed by first finding the test statistic, where \(M\) is the observed number of occurrences and \(\bar{x}=M/N\) is the sample mean: \[t={\bar{x}-\mu\over\sigma/\sqrt{N}}={M-N/b\over\sigma\sqrt{N}}\] Then the \(p\)-value for the two-tailed test is \[p=1-\text{erf}\left({|t|\over\sqrt{2}}\right) =\text{erfc}\left({|t|\over\sqrt{2}}\right)\] which is the probability that we would see a value of \(\bar{x}\) at least as extreme if \(X\) really has mean \(\mu\) and standard deviation \(\sigma\).

The results have to be interpreted carefully. In most practical uses of statistics, \(p<0.05\) is taken as grounds to infer statistical significance, that is, to reject the null hypothesis; the \(p\)-value itself can be read as the risk of being wrong in doing so. In this case, the null hypothesis is that the number is normal, having uniformly distributed digits. Very low \(p\)-values would therefore suggest that the number is not normal, while large \(p\)-values are merely consistent with normality. Numerical evidence like this is of course not a proof. So far, there is no proof that any of these common irrational constants is normal.
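The test above can be sketched directly in Python with the standard library's `math.erfc` (the function name `digit_pvalue` and the sample counts are hypothetical, chosen only to illustrate the formulas):

```python
import math

def digit_pvalue(count, n_digits, base=10):
    """Two-tailed p-value for observing `count` occurrences of one
    digit among `n_digits` digits of a base-`base` expansion, under
    the null hypothesis that digits are uniform (p = 1/base)."""
    mu = 1.0 / base                         # mean of the indicator X
    sigma = math.sqrt(base - 1) / base      # sqrt(b-1)/b from above
    xbar = count / n_digits                 # sample mean M/N
    t = (xbar - mu) / (sigma / math.sqrt(n_digits))
    return math.erfc(abs(t) / math.sqrt(2))
```

A digit appearing exactly \(N/b\) times gives \(t=0\) and \(p=1\); for instance `digit_pvalue(100000, 1000000)` returns `1.0`, while an excess of 300 occurrences, `digit_pvalue(100300, 1000000)`, gives \(t=1\) and \(p\approx 0.317\), far from any evidence against uniformity.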