This is not quite correct. Generating a random vector using three calls to rand() is equivalent to picking a random point inside a cube. If you normalize those points, you're effectively squishing the corners of the cube in toward the center to make a sphere, so directions toward the corners of the containing cube will appear more frequently than directions toward the face centers ('top', 'bottom' and 'sides').
@ginko's comment is correct - you can fix the algorithm by throwing out any points that lie outside the sphere before normalizing.
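In code, that fix might look like the following rough Python sketch (the function name is mine, not from any of the comments): draw candidates uniformly from the cube [-1, 1]^3, discard any that land outside the unit ball, then normalize the survivors onto the sphere.

```python
import random

def random_unit_vector_rejection():
    """Uniform point on the unit sphere via rejection from the cube."""
    while True:
        x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
        r2 = x * x + y * y + z * z
        # Reject points outside the unit ball, and points so close to the
        # origin that normalizing them would be numerically unstable.
        if 1e-12 < r2 <= 1.0:
            r = r2 ** 0.5
            return (x / r, y / r, z / r)
```

The rejection step is what restores uniformity: conditioned on landing inside the ball, the direction of the point is the same in every direction.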
As others have pointed out, the distinction between randn() and rand() is crucial here - the former gives you points from a (spherically symmetric) normal distribution, the latter gives you uniform points from the unit cube.
One of the advantages of the normal-sampling route over @ginko's rejection-based method is that in high dimensions almost all of the volume of the unit cube is situated outside of the unit sphere (the volume of the unit cube is always 1, whereas the volume of the unit sphere decreases exponentially with dimension). So the rejection method becomes exponentially slow in high dimensions, while the Gaussian method still works just fine.
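You can see this slowdown directly with a quick Monte Carlo estimate of the acceptance rate (a throwaway sketch, my own function names):

```python
import random

def acceptance_rate(dim, trials=20000, seed=0):
    """Estimate the fraction of uniform cube samples inside the unit ball."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0:
            hits += 1
    return hits / trials
```

In 2-D about 79% of candidates survive (pi/4 of the square's area is inside the circle); by 10-D only a fraction of a percent do, so rejection sampling grinds to a halt while the Gaussian method is unaffected by dimension.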
In three dimensions (to keep the notation reasonably simple) you need a distribution with density f such that f(x)f(y)f(z) is a function of x^2 + y^2 + z^2 alone (i.e. of the squared distance from the origin). It looks like the Gaussian (up to scaling) is the only one.
Without loss of generality, write f(xi) = k1 * exp(-g(xi)) [1] for some function g. Then we need the joint pdf to satisfy f(x1,...,xn) = k2 * exp(-h(R^2)), where R = (sum_i xi^2)^(1/2) (writing the argument as R^2 rather than R is w.l.o.g. too). Taking n = 2 for simplicity, this gives g(x1) + g(x2) = h(x1^2 + x2^2). Then, assuming the functions g and h are analytic, we end up needing g(x) = k * x^2; otherwise we get cross terms in the Taylor expansion that can't be cancelled out for all xi. Sounds good?
[1] The function f trivially needs to be symmetric, which is what justifies the "no loss of generality".
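For what it's worth, the final step can also be seen without Taylor expansions, assuming only that g and h are differentiable (my sketch, not part of the original argument):

```latex
% Differentiate g(x) + g(y) = h(x^2 + y^2) with respect to x:
g'(x) = 2x\, h'(x^2 + y^2).
% The left-hand side does not depend on y, so h'(x^2 + y^2) must be
% independent of y; varying y sweeps the argument over an interval,
% hence h' is a constant, say k. Then
g'(x) = 2kx \quad\Longrightarrow\quad g(x) = kx^2 + c,
% so f(x) = k_1 e^{-c} e^{-kx^2}, a Gaussian (k > 0 is forced by
% integrability of the density).
```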
Hmm, I'm not sure I get your point. I'm trying to prove that the joint pdf of N i.i.d. RVs is isotropic if and only if the RVs are gaussian. If I assume the pdfs are gaussian in the first place, the proof isn't valid, is it?
I don't totally understand the math, but davmre is suggesting calling randn(), not rand() -- so sampling from a normal distribution instead of a uniform distribution.
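For anyone without MATLAB/NumPy at hand, here is what that suggestion amounts to in plain Python, using the standard library's random.gauss in place of randn() (the function name is mine):

```python
import math
import random

def random_unit_vector_gaussian(dim=3):
    """Uniform point on the unit sphere: normal samples, then normalize."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-12:  # essentially never fails; guards division by ~0
            return [c / norm for c in v]
```

Unlike the rejection method, nothing is ever thrown away (apart from the astronomically unlikely all-zeros draw), and it works in any dimension.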
The math is fairly simple. The probability density of a multivariate standard gaussian has the simple form f = a * exp(-(x1^2+...+xN^2)/2) = a * exp(-R^2/2), where a is a normalizing constant and R is the norm of the position vector; this is obviously rotationally symmetric [1].
But that pdf is also the joint pdf of N i.i.d. gaussians, as is evident from the decomposition f = a * exp(-x1^2/2) * ... * exp(-xN^2/2) [2], which is the joint pdf of x1,...,xN such that fx1 = c * exp(-x1^2/2), ..., fxN = c * exp(-xN^2/2).
[1] Since exp(-R^2/2) does not depend on direction but only on distance from the origin
[2] The fact that f(x1,...xN)=f1 * ... * fN if x1,...xN are independent follows directly from the fact that P(A & B) = P(A)*P(B) if A and B are independent events.
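A quick numerical sanity check of [2] for the 2-D gaussian case, constants included (I've kept the usual 1/2 in the exponent of the standard normal):

```python
import math

def joint_radial(x, y):
    """2-D standard gaussian density in its radial form a * exp(-R^2/2)."""
    a = 1.0 / (2.0 * math.pi)  # normalizing constant for the 2-D case
    return a * math.exp(-(x * x + y * y) / 2.0)

def product_of_marginals(x, y):
    """Product of two 1-D standard gaussian densities."""
    c = 1.0 / math.sqrt(2.0 * math.pi)  # 1-D normalizing constant
    return (c * math.exp(-x * x / 2.0)) * (c * math.exp(-y * y / 2.0))
```

The two agree at every point, which is exactly the factorization used above.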
Great point, you are correct. It's not enough for the distribution of the random vectors to be symmetric under some rotations; it must be isotropic, looking the same in every direction from the origin, which a cube is not. (Nor is any other method derived from a polyhedral solid, as I was thinking might have been possible, through something like selecting a random face from an icosahedron and then a random point on that face.)