Generating random points inside a sphere (karthikkaranth.me)
252 points by kkaranth 6 months ago | 121 comments



A note about the rejection method: in high dimensions, this becomes very expensive. This is because the volume of a ball inscribed in a unit cube goes to zero (very quickly) as the dimension goes to infinity, so you start rejecting points with high probability.
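To put a rough number on that, here's a quick sketch (mine, not from the article) of the acceptance rate of cube-based rejection sampling, using the standard recurrence V_n = V_{n-2} * 2*pi/n for the volume of the unit n-ball:

    // Acceptance rate of rejection sampling from the bounding cube:
    // (volume of the unit n-ball) / (volume of the cube with side 2).
    function acceptanceRate(n) {
      let v = (n % 2 === 0) ? Math.PI : 2;            // V_2 = pi, V_1 = 2
      for (let k = (n % 2 === 0) ? 4 : 3; k <= n; k += 2) {
        v *= 2 * Math.PI / k;                         // V_k = V_{k-2} * 2*pi/k
      }
      return v / Math.pow(2, n);
    }

    [2, 3, 5, 10, 20].forEach(n =>
      console.log("n=" + n + ": accept ~" + acceptanceRate(n).toExponential(2)));
    // n=3 accepts ~52% of draws; by n=20 the rate is already ~2.5e-8.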



Is there a shape that resembles a sphere (not a cube and not a point) but in higher dimensions will take the same relative volume as in lower dimensions such that this rejection probability stays the same?


I guess there's a new generation of cryptographic primitives implied by your assertion - given a parametric high-dimensional unit sphere, ...


I think that might be premature optimization to worry about finding random points in an n-dimensional hypersphere where n is a huge number.


This is not true! Nearly all of the hypervolume of a high-dimensional n-ball is located near its surface. A popular way of stating this is "If you peel a high-dimensional orange, there will be almost nothing left." The ratio of "rind" to "pulp" increases very, very quickly.

A good intuition for why this happens is that the distance from the center of an n-dimensional hypercube to any of its corners is `r * sqrt(n)`, but the distance from the center of a ball to its surface is always just `r`. So if `r` is fixed, and `n` keeps increasing, the corners get "spikier" and farther from the center. Very quickly, almost all of the hypervolume of a hypercube is located in these remote corners, and almost none of it is in the ball at the center.

See https://en.wikipedia.org/wiki/Curse_of_dimensionality
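A quick numerical illustration of the orange-peel claim (my own sketch): the fraction of an n-ball's volume lying within 1% of the surface is 1 - 0.99^n, which rushes to 1 as n grows.

    // Fraction of an n-ball's volume in the outer 1% "rind": 1 - 0.99^n.
    const rind = n => 1 - Math.pow(0.99, n);
    [3, 100, 1000].forEach(n =>
      console.log("n=" + n + ": " + (100 * rind(n)).toFixed(2) + "% of volume in the outer 1%"));
    // n=3: ~2.97%, n=100: ~63.4%, n=1000: ~99.996%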


Seems to me it’s still premature optimization if you expect to use this primarily for low values of n (eg three).


The comment I was replying to stated that n was a "huge number", presumably bigger than 3.


I'm pretty sure you're reading it wrong.

I interpret the comment as saying "it's premature to worry about (finding points in n dimensions where n is large)". This interpretation implies n is not large.

Not "it's (premature to worry about finding points in n dimensions) where n is large". This interpretation implies n is large, but that you don't care.


Sure, but if you can't prematurely optimize in HN comments, when can you?


It is definitely an interesting problem how to sample uniformly from the set of n-dimensional vectors with an L2-norm smaller or equal to 1. I am sure that it has practical applications, too. Also, nobody said that n has to be huge.


No, n does not need to be huge, n=8 is high enough.


In this context 8 is a huge number.


Not at all. There are plenty of circumstances in all kinds of numerics where one would want points on the sphere in high dimensions.

It's not just a geometric object that has limited practical value for d>3. It is the set of points where the norm is unity, which is a pretty fundamental concept in all sorts of places.


Not as part of a graphics library. You might want to do 4d or 5d. But past that is very unlikely.

In terms of actually solving the problem, you just want a random vector, a random distance, and a function to map that random distance to an actual distance. But if you never deal in 6+ dimensions that's a waste of time.


Well, not in graphics libraries obviously, but it comes up quite a bit in other areas. Vectors with a norm of unity are just a fundamentally important set.


Who said anything about graphics?


> While working through Peter Shirley’s Ray Tracing in One Weekend (http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-we...)

Which is where that method was used.


Hi boss, we didn't move those boxes, because the crane only carries up to 8 tons, and the boxes weigh 3, so we've started building a new crane system instead that will scale to lift arbitrarily large weights. It'll probably be a couple of years before we get to actually moving the boxes.

When your problem is n=3, anything larger than 3 is “huge”.


Optimizing for anything more (or less) than three dimensions is premature.


The graphics are nice, but this article really could have used some complexity analysis and some probability theory.

The author neither discusses the asymptotic complexity of the algorithms, nor the run-time of his implementation (which is more pertinent here), nor gives any proofs of why some of the algorithms sample uniformly from the unit ball and why some of them don't.

Also, it would have been really nice to generalize this problem to n dimensions. I assume that there is some small value of n where naive rejection sampling is worse in practice than one of the more sophisticated methods.


Does he need to? All algorithms given except the first appear to be O(N), and their performance is going to depend almost entirely on how fast the local implementations of trig operations involved are.


Isn’t the first method still O(N)? You will reject a constant percentage of points on average (the author says 48%), but constants fall out of big-O.


Depends on how random works; technically it may never exit the loop. Rejection algorithms could be quite dangerous under specific conditions. I would agree with the grandparent - it's a weak article.


That's true if you assume a while loop conditioned on rand() runs in constant time, but that wasn't covered in my CS 101 on complexity analysis...


Average-case constant. Worst case? Never halts.

Generating a single point doesn’t depend at all on our choice of n (the number of points generated). But yeah, complexity analysis of stochastic algorithms is fairly advanced stuff.


It never halts under the condition of infinitely small probability. So it never never halts.


Measure 0 and "never" are subtly different. There do exist sets of random draws for which the algorithm never halts. The probability of getting such a draw is zero, but they exist.

It's like picking a random number uniformly between 0 and 1 (in theory, not on a discrete computer). Getting 2 is impossible, never happens. Getting say 1.5 has probability 0, but it is possible.


Well, for starters - I said it was an "infinitely small probability" (just as you've said it's zero), so I said "never" because it's more of a practical question.

With the same logic you can say that sha-256 is easily reversible by iterating over all possible inputs x until sha256(x) matches. But it isn't. Purely because of the sheer scale of that number.

And to be clear: reversing sha-256 under the condition of sha-256 being a perfect hash function is the same as drawing 256 zeros from a purely random function.

The best way to go about the complexity of this algorithm is the runtime distribution, where the total number of samples determines how many additional (rejected) iterations we'd need to do on average.


(Getting 0.5 I think you mean :)


Right, thanks


Yeah, the worst case is more of a limit. If you're really unlucky the algorithm can take an arbitrarily long time to halt.


"about 48% of the points chosen are discarded. This may not be a huge problem if the number of points that need to be generated is small"

... or if the number of points isn't small -- it's only 2x the cost of the simple linear case without a loss of uniformity or adding any costly trig functions. That's still pretty good!


2x the compute resources is still horrible for our environment, which isn't pretty good to me. Computer Scientists/Software Engineers need to get it into their heads that wasting compute resources directly increases our footprint on this planet.

We need to simplify the web not because of bandwidth use, but because all the compute spent moving around useless ads and scripts is largely a waste. We need to stop using blockchain for everything where a centralized ledger would suffice, because blockchain literally wastes resources on those problems for no reason. We need to start thinking about our software's impact on the environment due to wasted compute cycles for things that are unnecessary or that could be improved by reducing the cycles needed, and stop doing that lazy or fancy stuff.


If you read the comments, you'll see that the "wasteful" method is actually the cheapest computationally.


I'm responding to the general lack of concern by the commenter above and most engineers I know. You and this other guy commenting are missing the point, I'm not talking specifically about this algorithm, but about engineers in general who don't care about wasted cycles.


Yes, but that's the entire point of these comments: the cycle-"wasting" method is actually the better optimization here. Unless the goal is purely an intellectual exercise, you lose more time with the expensive trig functions. The real world favors the naive approach, in this context and case.


Every cycle is sacred.


If a cycle is wasted, Knuth gets quite irate

Tangentially, I believe the topic of generating random points inside the unit ball is covered in vol 2 of TAOCP, with all the math you could wish for to justify the different methods.


2x compared to a simpler problem, not compared to the optimum.


Correct uniform sampling in things like this comes up a lot in Monte Carlo radiation transport problems and is covered very well in your basic first year graduate Monte Carlo class in a good nuclear engineering program. If you want to go deeper, check out the literature around that.


> in your basic first year graduate Monte Carlo class

How many years of graduate classes are there on Monte Carlo methods for there to be a 'first year' one?!


I think he/she means the Monte Carlo class you take in your first year of nuclear grad courses.


Yeah, generally there's a lot of coursework like this in the first year. Then if you specialize in it you're going to take like 2-3 advanced classes plus relevant math and computer science stuff.


Depends if your graduate focus is on core neutronics or core thermodynamics.


Monte Carlo based sampling also shows up in finance, graphics programming, materials science, etc. Graphics programming and physics were areas where uniform sampling of geometrical surfaces was important for me.


That (last method) is a lot of sines and cosines, cube root and an acos... I bet for 3D the rejection method is not that much slower, in wall clock time.

OK so I coded up the first one and the last one in C and compiled on my mac using whatever gcc links to. I found 1e7 points in the sphere:

first routine: 0.521s

last routine: 0.850s

[edit]: C code here: https://pastebin.com/zH0BehSM


Note that his last way does cos(acos(cosϕ)) after generating cosϕ uniformly. Which is a waste; you can just set sinϕ = sqrt(1-cosϕ^2).

After fixing that, this way is faster for me. Some trig, but no branches.

Edit: https://pastebin.com/UExZ5Siw
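In plain JS the same idea looks roughly like this (a sketch of the optimization applied to the article's last method, not the article's or the parent's actual code):

    // Spherical sampling without the cos(acos(...)) round trip: draw cosPhi
    // uniformly, recover sinPhi algebraically, and take the cube root of a
    // uniform number for the radius so the density is uniform in volume.
    function randomPointInBall(radius) {
      const cosPhi = 2 * Math.random() - 1;            // uniform in [-1, 1]
      const sinPhi = Math.sqrt(1 - cosPhi * cosPhi);   // no acos/cos needed
      const theta = 2 * Math.PI * Math.random();
      const r = radius * Math.cbrt(Math.random());     // CDF of r is r^3
      return [r * sinPhi * Math.cos(theta),
              r * sinPhi * Math.sin(theta),
              r * cosPhi];
    }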


But you used Julia. I'm almost certain that if you applied your optimization to the C code you'd find that it makes no difference because `gcc -O2` already made it.


Gcc collapses acos(cos(x)) to x on -O2? Surely not, since this isn't even accurate for x outside of -pi to pi. And that's ignoring the floating point errors that make those two expressions differ, which I thought was gated behind a different optimization flag, though I'm not sure about that.


Well, here it's cos(acos(x)) and sin(acos(x)), which are easier. But still that's pretty smart, I did not know this.


Julia is usually run with LLVM -O3. Is LLVM just missing that optimization then? This sounds like something the optimizer wouldn't do without fastmath flags enabled.


The first routine uses about 3 rands per attempt, and roughly half the attempts are discarded, so about 6 rands per accepted point. Since the second one has three rands in it, plus plenty of trig, it really becomes a question of whether you can make the trigonometry faster than three rands. But as you saw, a straightforward attempt will pretty much certainly end up with worse performance.

But what if one used optimizations like lookup tables and trigonometric approximations instead? Could they end up costing fewer cycles than three rands? Maybe not LUTs, maybe other trickery someone knows of?

And what about the cost of those three rands? Could one use a fast xorshift with good enough results here? Does one even need three random numbers? Maybe two + modulo trickery will do?
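For the xorshift question, something like this is what I have in mind (hypothetical sketch; far lower statistical quality than Math.random, but very cheap per call):

    // Minimal 32-bit xorshift returning floats in [0, 1).
    let xsState = 123456789 >>> 0;                     // any non-zero seed
    function xorshift32() {
      xsState ^= xsState << 13; xsState >>>= 0;
      xsState ^= xsState >>> 17;
      xsState ^= xsState << 5;  xsState >>>= 0;
      return xsState / 4294967296;                     // divide by 2^32
    }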


Typically the uniform random numbers can be generated quite cheaply, especially if the resolution is not critical (e.g. if uniform floats are enough rather than uniform doubles) or if the quality of the randomness is not critical, and in these cases discarding half of them will still win versus other costly operations.


If we are trying to optimize anything, we should probably drop all the unnecessary floating point divisions. I would expect those to be the bottleneck in your code, though you'd probably want to time it to be sure.

(rand()/(double)RAND_MAX) etc.

Instead assign some name to 1.0/(double)RAND_MAX, and multiply.


This problem is equivalent to generating random orientations, which can be easily done by generating random unit quaternions (see, for example, [1]). Each quaternion found will correspond to a point on the surface of a 4D sphere, which can be mapped to a point inside a 3D sphere.

[1] http://mathproofs.blogspot.com/2005/05/uniformly-distributed...


If you take the stereographic projection of a uniform random unit quaternion onto 3-space, then to get a uniform distribution in the ball you need to apply a non-linear scaling function to the radius.

I’m not sure this really makes much sense as a way to generate random points in the ball (especially if you care about the details of rounding etc.), but maybe fun for someone to implement. I would expect the rejection method to be noticeably faster.

* * *

Another cute thing you could do is generate 3 normally distributed random numbers (giving a normally distributed random point in a 3-space), and then apply the appropriate non-linear scaling to the radius.


Interestingly, it seems to be all about bending some distribution and then unbending it again. Orthogonal dimensions are independent, so choosing three random numbers generates a uniform in-cube distribution. But once you bend it via polar coordinates or vectors, you “wrap” many points of polar/vector abstract space to near the cartesian origin. And then you have to bend back, to make it uniform again. That raises an interesting question on whether it relates to topology problems. Since generating N randoms is the fastest primitive operation, is there a quick and uniform translation from a cube to the sphere, such that all points are still uniform? Can we apply space-filling curve magic here (like Hilbert’s)? I feel like there is much more geometry power under that, but sadly I’m just a guy with no math background.


It's an interesting idea, but I think it would prove to be equivalent.

As another commenter pointed out, the efficiency of this operation really only matters in higher dimensions.

I bet a generative model would work, though it's tricky enough for me not to be able to think it through on the spot. Something like:

1. Pick a subset of points thus far generated at random.

2. From this subset, pick the point with the furthest distance from the origin.

3. Form a new unit sphere where r is 1-d of the picked point (ie, make a new sphere that is a subset of the parent sphere with a single touching point of the parent sphere).

4. Generate a new point in this subsphere using the normalized vector method.

Not exactly the above, but something like it. The downside of the above approach is that I think it will create a distribution of points that may be unnaturally uniform. It would be interesting to define a set of equations that could benchmark different approaches for how well they compare to the in-cube strategy.


The main issue with the second method, which isn't discussed at all, is that it's more likely to pick a direction towards the corners of the unit cube. You can even see that bias in the point cloud.

The problem with choosing d uniformly is also better explained by pointing out that shells of larger radius have larger surface, thus they'll be underrepresented.


Interestingly, the method in which you determine your "random point" can greatly affect the end result, so swapping out the methods (e.g. "generate three coordinates and discard" method for the "generate a direction and a length" method) may produce wildly different results.

An interesting illustration of this is Bertrand's Paradox, which asks the question: "if you draw a random chord inside a circle, what is the probability that its length is greater than sqrt(3)?". The correct answer is 1/2. The correct answer is also 1/3. Oh, and the correct answer is also 1/4. It depends on three equally valid (and uniformly distributed) ways of defining your random chord.

See: http://web.mit.edu/tee/www/bertrand/
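A quick Monte Carlo sketch of the three answers (my own code, not from the linked page):

    // Bertrand's paradox: three "uniform" random chords of the unit circle,
    // and the probability each is longer than sqrt(3).
    const N = 1e6, limit = Math.sqrt(3);
    let endpoints = 0, radial = 0, midpoint = 0;
    for (let i = 0; i < N; i++) {
      // 1. Two uniform random endpoints on the circle.
      const t1 = 2 * Math.PI * Math.random(), t2 = 2 * Math.PI * Math.random();
      if (Math.hypot(Math.cos(t1) - Math.cos(t2), Math.sin(t1) - Math.sin(t2)) > limit) endpoints++;
      // 2. Uniform random distance of the chord from the centre.
      const d = Math.random();
      if (2 * Math.sqrt(1 - d * d) > limit) radial++;
      // 3. Chord midpoint uniform in the disc (rejection-sampled).
      let x, y;
      do { x = 2 * Math.random() - 1; y = 2 * Math.random() - 1; } while (x * x + y * y > 1);
      if (2 * Math.sqrt(1 - x * x - y * y) > limit) midpoint++;
    }
    console.log(endpoints / N, radial / N, midpoint / N);  // ~1/3, ~1/2, ~1/4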


Paradox seems like a strong word for this. "Draw a random chord" is almost a pun, draw from what distribution? Until you tell me, there really is no one answer.


> "Draw a random chord" is almost a pun

No more so than 'pick a random point in a circle'. Note in all cases we're talking the uniform distribution. The question of using Cartesian coordinates, parametric coordinates, etc. is what causes the differences.


I'm complaining that there isn't one uniform distribution. For example uniform in x on [1, 2] is different to uniform in 1/x on the same interval, and which one you draw from may matter. You have to be told. In fact every (one-variable) distribution is uniform in something!

"pick a random point inside a circle" comes with an implicit "uniformly according to the ordinary euclidean metric on the paper". I guess it's a judgement call that this is obvious enough, but "a random chord" is clearly not... otherwise we wouldn't be here!


When the author mentions "normally distributed numbers" it sounds like he is sampling a normal (Gaussian) distribution. In fact, he is sampling from a uniform distribution.


I think he does mean sampling from a gaussian. You can tell because the example calls randn().

The gaussian distribution is invariant under rotations in 3d (and more generally invariant under any orthogonal transformation in any dimension), so if one takes a random vector X whose components are independent N(0,1) random variables and computes X/|X| (where |X| is the usual euclidean norm of X), the result is guaranteed to be uniform on the unit sphere.
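As a sketch of that recipe plus the cube-root radius fix (my own code; the article's randn() is presumably doing something equivalent under the hood):

    // Uniform point in the unit ball: X/|X| is uniform on the sphere,
    // and cbrt(U) gives the radius its r^3 cumulative distribution.
    function gaussian() {                              // basic Box-Muller
      const u = 1 - Math.random(), v = Math.random();  // avoid log(0)
      return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
    }

    function uniformInBall() {
      const x = gaussian(), y = gaussian(), z = gaussian();
      const n = Math.hypot(x, y, z) || 1;              // guard against a zero vector
      const r = Math.cbrt(Math.random());
      return [r * x / n, r * y / n, r * z / n];
    }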


Indeed, I've used randn() wherever a normally distributed random number is needed. This is computed using the Box-Muller method.


For people who like the simplicity of the rejection algorithm, it might be worth checking out Ziggurat sampling which is based on a similar idea, but shapes the sampling region to reduce rejections: https://en.m.wikipedia.org/wiki/Ziggurat_algorithm


To add to that comment, importance sampling is also an attempt to reduce the number of rejections and could be interesting here.

https://en.m.wikipedia.org/wiki/Importance_sampling


Importance sampling works when you're interested in some linear function[al] of the distribution; not for actual sampling.


Pleasantly surprised by the effect when scrolling over the sphere


I couldn't see any spheres at all so I was confused by your comment, which led me to realize WebGL wasn't working, which in turn reminded me I had disabled hardware acceleration in Chrome long ago because of some bug which made browsing unbearably slow. I've now re-enabled it and it seems the bug has since been fixed. So, thank you!


It’s the forced graphics card switching that causes a system crash in every 20th case.


Same, you can also drag them around to a new rotation.


Related: selecting points on the surface of a sphere https://www.jasondavies.com/maps/random-points/

Poisson disc sampling can be quite useful.


I wonder what it would look like in 3D :)


For the second example, sampling the radius uniformly leads to an error with a higher point density in the centre of the sphere. The radius, r, has a weight proportional to r^2, i.e. the surface area. So, one can take a uniform random sample s between 0 and R^2, and take the square root of s. That will give a higher probability density around larger radii. Then, the second picture should look like the first one.


Seems like you stopped reading after the second example.


Yes, I didn't look at the details of the next methods. He adjusts for this in the next examples by taking the cube root of the uniform sample (not the square root, as in my post), which is the correct adjustment factor, as a small shell will have weight in proportion to the difference between cubic powers, which is proportional to r^2.


Similar problem which ends up not being quite as simple in the general case as you might think: sampling from a unit simplex: http://www.cs.cmu.edu/~nasmith/papers/smith+tromble.tr04.pdf


I think the section "Using normally distributed random numbers" deserves an additional explanation. The code may look simple, but like most of the other methods mentioned on the page, generating normally (Gaussian) distributed random numbers still requires the use of rejection sampling and sine/cosine functions. See https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform


I implemented Marsaglia's Box-Muller method in a library [^] and it seems quite quick. It doesn't involve trig functions, just a sqrt, a log, and an average of about two rands per normal outputted.

It seems to work out around the same speed as a trig function - in javascript.

For some purposes just averaging a few rands could be used to get a good enough approximation of a normal.

I wonder if a short Taylor series could be found that quite accurately maps a uniform distribution to a gaussian one.

[^]http://strainer.github.io/Fdrandom.js/
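For reference, the polar method being described goes something like this (a sketch, not the library's actual code):

    // Marsaglia's polar method: rejection-sample a point in the unit disc,
    // then get two independent normals from one log and one sqrt, no trig.
    function polarNormalPair() {
      let u, v, s;
      do {
        u = 2 * Math.random() - 1;
        v = 2 * Math.random() - 1;
        s = u * u + v * v;
      } while (s >= 1 || s === 0);
      const f = Math.sqrt(-2 * Math.log(s) / s);
      return [u * f, v * f];
    }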


I thought this was kinda fun, and I was trying to figure out why you couldn't just take a random x and use it to find a y that preserves x²+y²=r² (ie by setting y to a random number between 0 and ±√(r²-x²)). In the end I figured out that you need to generate x the same way, but if you have, say: n=rand(r), x=rand(±√(r²-n²)), y=rand(±√(r²-x²)) it seems to work.

Here's a version I extended to three dimensions, designed to be run in the JS console of the original article (it'll show up underneath the first sphere):

    new SphereSimulation(document.getElementById("spheres1"), () => {
      const rand = Math.random
      const root = (x) => rand() < 0.5 ? Math.sqrt(x) : -Math.sqrt(x)
      const invdist = (a=0, b=0) => root(RADIUS*RADIUS - a*a - b*b)

      const x1 = rand() * invdist()

      const x2 = rand() * invdist(x1)
      const y2 = rand() * invdist(x2)

      const x3 = rand() * invdist(x2, y2)
      const y3 = rand() * invdist(x3, y2)
      const z3 = rand() * invdist(x3, y3)

      return new THREE.Vector3(x3, y3, z3)
    })
It looks okay to me - am I missing something? Is this a reasonable approach?


You're missing the requirement that the random sample be uniform in the sphere. That is, if the sphere is cut into any number of pieces of equal volume, the sample is equally likely to appear in any piece.

Your approach samples uniformly from the radius, so you're going to end up with far too many samples close to the center.


Interesting... thanks for the pointer. I did some testing and the points definitely do seem to cluster along the axis... just increasing the number of iterations of √(r²-n²) seemed to improve it though. At 5, I can't tell the difference anymore:

https://codepen.io/anon/pen/xJJOEq


I posted a comment where I described this method as my preferred solution. You didn't need to patch it up though... it's fine to just generate the 3 numbers and then choose a random assignment from them to x, y and z. This fixes the bias that would be present if you always generated x first.


So what ends up being the general purpose method for uniformly distributed random numbers in some arbitrary shape? (other than the rejection method of course)

If arbitrary shapes always require the rejection method, what are some "classes" that have easier algorithms (the sphere is obviously included here)?


Not my area of expertise, but based on what I've read, you generally try to get as close as possible through pure mappings, and then use rejection sampling to throw out the samples that don't fall under the desired distribution. This can work even if the distribution is dynamically changing.


This is also very important in many global illumination techniques which require sampling the hemisphere around a point. As a case in point, a simple Monte Carlo path tracer uses this to sample random directions, see e.g. [0]. In addition to this there are many 'tricks' one can use to reduce the variance/noise using differently weighted sampling (to help the convergence of MC). A nice technique to get a cosine-weighted hemisphere sampling is to generate a random point on the unit disc and then project it onto the (hemi)sphere. Seems somewhat relevant here.

[0] https://en.wikipedia.org/wiki/Path_tracing#Algorithm
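The disc-projection trick looks roughly like this (a sketch in the local frame where z is the surface normal; often called Malley's method):

    // Cosine-weighted hemisphere sample: a uniform point on the unit disc,
    // lifted onto the hemisphere.
    function cosineHemisphere() {
      const r = Math.sqrt(Math.random());              // sqrt => uniform on the disc
      const theta = 2 * Math.PI * Math.random();
      const x = r * Math.cos(theta), y = r * Math.sin(theta);
      return [x, y, Math.sqrt(Math.max(0, 1 - x * x - y * y))];
    }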


The method I immediately think of is just based on the relation:

Sqrt(x^2 + y^2 +z^2) <= 1 reduces to x^2 + y^2 +z^2 <= 1

Generate the random value for x^2, any random value <= 1

Now we have y^2 +z^2 <= 1-x^2

Generate a random value for y^2, any random value <= 1-x^2

Finally generate a random value for z^2, any random value <= 1-x^2-y^2

Take the square roots of each number we generated to get the final coordinates.

But wait! There's another step... the most important one! There are 6 orderings of (x,y,z). We must choose a random one, because otherwise x will be biased towards higher values compared to y and z.

So I kind of lied. All we needed was 3 random values via the subtraction methods above. Then we need to randomly choose their ordering as (#,#,#)

This seems like the easiest most trivial solution that would be uniformly distributed.

Let me know if I am wrong.


You are wrong.


>Let me know if I am wrong.

You're wrong on multiple levels.

1. Generating random values for x^2, y^2, z^2 and taking their square root will only give you values in the x,y,z > 0 octant.

(But let's say you "fix" this by randomly multiplying them with -1.)

2. Taking the square root of a uniformly distributed random variable is no longer uniformly distributed.

3. Randomly reordering the coordinates won't fix your bias.

Here's a demonstration in 2D: https://jsfiddle.net/tz85wnxy/59/

Uncomment line 17, 18, 20 to see how it's still not uniform even if you randomly multiply the coordinates by -1 and reorder them.


Works fine as long as you generate x,y,z randomly and not their squares.

Same process but minor tweak.

https://jsfiddle.net/w8zLvsy2/


This approach is still incorrect, and the bias toward the set of 4 points S = {(1,0),(-1,0),(0,1),(0,-1)} can be seen in the diagram. There is no way to break out of this bias by flipping or reordering coordinates because those two operations preserve the symmetries of S:

- if you flip the x-coord of a point near S, e.g. (-.9,0), you still get a point near S, e.g. (.9,0)

- if you swap the x-coord and y-coord of a point near S you still get a point near S.

- etc.



So which algorithm ended up being fastest?


To my untrained eye, it looks like the first naive algorithm would be fastest. Two of them require trig functions while the other relies on sqrt and cube root, all of which are expensive functions when compared to the first approach.

So I'd wager that the 50% reject rate would still be cheaper at scale than evaluating trigonometric functions.


A quick test with C++'s <random> library agrees. For n = 100,000,000 points, the rejection method beat out the improved spherical by 7.8%. Using the normal distribution was even slower.

  XYZ: 16.418057 seconds
  Gauss: 23.415560 seconds
  Spherical: 17.705553 seconds


Those results are surprisingly close.


You'll get a different result if random number generation is made more expensive, e.g. if you need to use a hardware RNG instead of a PRNG.


But why would you do that?


But you might want to consider average case time and worst case time. The first algorithm might have a good average case time, but its worst case time is infinity, which might be unacceptable.


The probability of experiencing bad cases decays exponentially with the number of iterations. 0.48^n.

Your CPU would spontaneously undergo fusion by tunneling before it got stuck on a bad case for more than 10 microseconds.


Could you say a little more about what relevance 0.48 has here?


From the article: "about 48% of the points chosen are discarded".


It's the fraction of the volume of a cube enclosing (bounding) a sphere to the volume of that sphere.

  1 - 4/3*pi*r^3 / (2*r)^3 = ~0.4764
The algorithm samples points inside a unit cube and rejects those not inside the unit sphere, so that's the fraction of points it'll be discarding.


But if the randomness was based on secret information, you've now introduced a timing or power analysis attack to extract that secret information. The other solutions, although slower, have the advantage of being constant-time.


Depends on how fast your PRNG is, but I would expect the rejection method to be best under pretty much any circumstances. The code is also simplest. If PRNG speed becomes a bottleneck just use a faster PRNG.


Section 3 of this paper explains this topic pretty well in N-dimensions:

http://jmlr.org/papers/volume17/blaser16a/blaser16a.pdf

To get a random point within the sphere (rather than on the surface of the sphere, as described in the paper) you also need to pick a random distance from the origin to randomly scale the point on the surface.


This is similar to the method used for generating normally distributed random numbers with the Box-Muller transform.

https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform


Maybe a mathematician can correct me. But could you not also map 3D vertices of uniform density into a 1D space and then uniformly sample that 1D space? Of course this may have issues with highly dense objects or very large ones.


I don't get what you mean by 1D spaces, but you could triangulate into tetrahedra and sample those [1] (taking their volume into account). You will write a lot of code and still have the triangulation approximation error. For high-dimensional spheres you can possibly subdivide the space into hypercubes and do the in/out approach inside each cube that contains a part of the sphere, to get a better hit/miss ratio.

[1] http://vcg.isti.cnr.it/jgt/tetra.htm


Yes, I was thinking you could triangulate the sphere at discrete shells. I have done 2D symmetric polygons where you needed to uniquely index into a 1D array of vertex values. I.e., based on the magnitude you can estimate which shell. By knowing the shell, you know the number of vertices for the shell; by knowing both you can map it to a specific vertex, as you can define a priori the absolute traversal order.

I have no idea if the 2D case extends to higher dimensions.


I find it unclear whether the author is aiming for Uniform, or just “looks pretty uniform”.


Notice how the rejection method becomes useless for higher dimensions.

https://en.m.wikipedia.org/wiki/Curse_of_dimensionality


While that's interesting, I'm having a complete failure of imagination on why one would be trying to solve this problem for higher dimensions, other than academic pursuits. Could you give a practical example?


“Sampling from the uniform distribution on the N-dimensional Euclidean ball (the N-ball) and its surface (the N-sphere) is a tool useful in many diverse research fields, for instance Monte Carlo integration in physics, generating random directions for MCMC sampling on bounded regions, generating random correlation matrices, Monte Carlo analysis of random packing on the sphere, generating random rotations in cryptography as well as various simulation studies in statistics”

https://www.sciencedirect.com/science/article/pii/S0047259X1...


I don't agree with the way he normalizes the radius in the last 2 methods. I think it should be a square root, not a cubic one. This is because the volume differential r^2 · sin(phi) · dphi · dtheta · dr is proportional to the square of r. This mistake is also made in a source he cites: https://math.stackexchange.com/questions/87230/picking-rando... and I tried to point it out there as well.


That's the correct volume element, but not what you want.

The volume inside radius r grows as r^3. It's the inverse of this _cumulative_ distribution function which maps uniform-in-r numbers to uniform-in-volume ones.
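A one-line sanity check of that (my own sketch): with r = cbrt(u), the fraction of samples inside radius 1/2 comes out to (1/2)^3, matching the volume fraction rather than the radius fraction.

    // With r = cbrt(u), P(r < 0.5) should be 0.5^3 = 0.125.
    let inside = 0;
    const samples = 1e6;
    for (let i = 0; i < samples; i++) {
      if (Math.cbrt(Math.random()) < 0.5) inside++;
    }
    console.log(inside / samples);  // ~0.125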


Did you empirically check your hypothesis?

What's the integral of r^2 dr?

Think about whether your claim makes sense when n = 1.


You both are right. Thank you for the comments.



