BrainDen.com - Brain Teasers

## Question

Bonanova answered my recent problem with the correct answer of 0.25, along with a nice graphic which explains the result. Now I am asking for an algorithm which produces discrete distributions in a random way. What would your algorithm be? You may use the answer to my "Probablity of a Probability?" problem to test it.

To recap what bonanova did:

Let S be the set {(x,y,z) | x, y, z are real & x+y+z=1 & 0≤x≤1 & 0≤y≤1 & 0≤z≤1} (this is the set of all discrete probability distributions on 3 outcomes). Pick a random element (a,b,c) from S, i.e., pick it in such a way that each element of S is as likely as any other to be chosen. Bonanova showed that the probability that all three of (1/6)≤a, (1/6)≤b, and (1/6)≤c hold is 0.25 for an (a,b,c) so chosen.

## Recommended Posts


Choose a and b randomly, uniformly in [0,1]. We're going to break a unit stick in two places, a and b.

c = min(a,b)
d = max(a,b)
x = c
y = d - c
z = 1 - d
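A minimal Python sketch of this stick-breaking recipe (the function name is mine):

```python
import random

def break_stick():
    """Break a unit stick at two uniform random points; the three
    piece lengths form a random distribution (x, y, z) with x+y+z=1."""
    a = random.random()
    b = random.random()
    c = min(a, b)
    d = max(a, b)
    return c, d - c, 1 - d
```

Each call returns one triple (x, y, z) of nonnegative pieces summing to 1.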


I mean #2. I'm sorry that I wasn't clear about that.

Here's a method for generalizing the solution to the OP to an arbitrary number of dimensions.

Let R(a, b) be a random generator function that returns a uniform random number between a and b. We want to generate random vectors (x1, ..., xn) such that x1 + ... + xn = 1 and 0 < xi < 1. Let's call this desired high-dimensional region U. The outline of this solution is that we will generate vectors within U non-uniformly; however, we will compute a rescaling weight for each generated vector and use the weights to recover the uniform distribution. The method is as follows.

1) We first compute a single vector (x1, ..., xn) within U. The first (n-1) elements will require the random number generator R(). The elements of this vector are computed as follows:

x1 = R( 0, 1 ) // the first element is a random number between 0 and 1

x2 = R( 0, 1 - x1 ) // the second element is a random number between 0 and (1- x1 )

...

xn-1 = R( 0, 1 - x1 - x2 - ... - xn-2 )

xn = 1 - x1 - x2 - ... - xn-1

The vector generated above is guaranteed to be inside the region U. However, it is by no means uniformly distributed within U. Fortunately, we can compute rescaling weights to convert it to a uniform distribution.

2) For the vector described in 1), we compute the corresponding rescaling weight, w. Let

w1 = 1

w2 = 1 - x1

...

wn-1 = 1 - x1 - x2 - ... - xn-2

Finally, the rescaling weight is

w = w1 * w2 * ... * wn-1

3) Repeat steps 1) and 2) M times to obtain M vectors and M corresponding rescaling weights. Call this set of M generated vectors V, and the set of rescaling weights W.

4) To generate a vector uniformly within U, we sample a single vector from the set V using the weights W. That is, vectors with higher weights have a higher chance of being selected, and vectors with lower weights have a lower chance. Repeat this step as necessary to get more uniformly distributed vectors in U.
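A sketch of this generate-and-reweight procedure in Python, assuming a sampling-importance-resampling reading of step 4 (the function name and the use of random.choices for the weighted draw are mine):

```python
import random

def weighted_sample(n, m):
    """Draw one n-dimensional distribution approximately uniform on the
    simplex: generate m non-uniform candidates as in step 1, compute the
    rescaling weight of each as in step 2, then resample one candidate
    with probability proportional to its weight."""
    candidates, weights = [], []
    for _ in range(m):
        x, w, remaining = [], 1.0, 1.0
        for _ in range(n - 1):
            w *= remaining                    # accumulates w1 * w2 * ... * w_{n-1}
            xi = random.uniform(0, remaining)  # xi = R(0, 1 - x1 - ... - x_{i-1})
            x.append(xi)
            remaining -= xi
        x.append(remaining)                   # xn = 1 - x1 - ... - x_{n-1}
        candidates.append(x)
        weights.append(w)
    return random.choices(candidates, weights=weights, k=1)[0]
```

The approximation improves as m grows, since the resampling step can only choose among the m candidates actually generated.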

Edited by bushindo


Brilliant solution, Captain. I can adapt the arguments from my previous answer to show that it will work.

For the three-dimensional case, it's easy to visualize the solution space as a triangle, and to see that the likelihood of x taking on any particular value drops off linearly from a maximum at x=0 to zero at x=1, that is, proportional to 1-x. The probability density is p(x) = 2-2x (scaled so the integral over the range x=0 to x=1 equals one), and the probability that a randomly generated x is less than some value n is the integral of p(x) from zero to n: prob(x<n) = 2n - n^2.

If you pick two random numbers (a, b) and set x = min(a, b), then the probability that x is less than any particular value n would be prob((a<n) OR (b<n)). Since 'a' and 'b' are independent, we can re-write this as

prob(min(a,b)<n) = prob((a<n) OR (b<n))

= 1 - prob((a>n) AND (b>n))

= 1 - (prob(a>n) * prob(b>n))

= 1 - ((1-n) * (1-n))

= 1 - (1 - 2n + n^2)

= 2n - n^2

So this proves that setting x = min(a, b) produces the desired probability distribution for x.

I suppose that technically you should now show that y and z are uniformly distributed. We are only considering cases where either (a<b), and we look at the distribution of b over the range (b=a to b=1), or (b<a), and we look at the distribution of a over the range (a=b to a=1). There isn't much doubt that the distribution over this range is flat, so I don't think this needs to be explicitly proven.

For the four dimensional case, it's slightly more tricky to visualize. In the three dimensional space, the solution set is a triangle from (1,0,0) to (0,1,0) to (0,0,1). If you add on a fourth dimension of time w such that w+x+y+z = 1, then you can imagine that triangle moving inward over time (as w increases from 0 to 1) until it reaches the origin at (w=1,0,0,0). Therefore, if you can randomly pick a point inside the pyramid from (0,0,0) to (1,0,0) to (0,1,0) to (0,0,1), then you can assign that point to the appropriate time w such that it falls in the solution set for the problem.

Whether or not you can visualize it, the important point is that the solution set at any given point on, say, the x-axis is now effectively a two-dimensional object (like the cross-section of that pyramid) which shrinks in both dimensions linearly as x increases. That means the area of the solution set at any particular point on the x-axis drops off from a maximum at x=0 to zero at x=1, but since it's a two-dimensional object it now falls off proportional to (1-x)^2. The integral of this is Int[(1-x)^2] = Int[1 - 2x + x^2] = x - x^2 + (x^3)/3; so to make the integral of the probability distribution over the range (x=0 to x=1) equal to one, p(x) = 3(1-x)^2. The probability that x<n is then Int[p(x)] (from x=0 to x=n) = 3n - 3n^2 + n^3.

Now if you pick three random numbers (a, b, c) and set x = min(a, b, c), then the probability that x is less than any particular value n would be

prob(min(a, b, c)<n) = prob((a<n) OR (b<n) OR (c<n))

= 1 - prob((a>n) AND (b>n) AND (c>n))

= 1 - (prob(a>n) * prob(b>n) * prob(c>n))

= 1 - ((1-n) * (1-n) * (1-n))

= 1 - (1 - 3n + 3n^2 - n^3)

= 3n - 3n^2 + n^3

Proving that this achieves the desired probability distribution.

With four dimensional space, after the first coordinate is set, this reduces down to the three dimensional problem.

This sort of argument could be continued for higher dimensional spaces.
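A quick Monte Carlo check of the prob(x<n) = 2n - n^2 claim for the three-dimensional case (helper name is mine):

```python
import random

def check_min_cdf(n_value, trials=100000):
    """Estimate Prob(min(a, b) < n) for uniform a, b on [0,1],
    for comparison with the closed form 2n - n^2."""
    hits = sum(min(random.random(), random.random()) < n_value
               for _ in range(trials))
    return hits / trials

random.seed(0)
estimate = check_min_cdf(0.25)
exact = 2 * 0.25 - 0.25 ** 2   # = 0.4375
```

With 100,000 trials the estimate should agree with the closed form to within a few thousandths.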

Edited by plasmid


Brilliant and elegant solution, CaptainEd. It also has the bonus of generalizing easily to higher dimensions. I'm in awe.


Nothing compared to the awe I feel reading the solutions and analyses by you, plasmid, Bonanova, and superprismatic. I avoided posting a half dozen solutions that were terribly non-uniform. And, of course, aside from plotting sample results in the 3-D case, I had no proof at all. I'm still trying to digest plasmid's proof.

But thank you, I enjoyed thinking of what superprismatic promised was a "simple method".


Thanks for all the wonderful work you all have done on this problem. I think we all learned a lot.

Now, as promised a few days ago........

Here's what someone told me many years ago on how to uniformly generate a discrete distribution of dimension N:

Step 1: Generate N numbers, Xt, t=1,2,3,...,N, chosen randomly from the uniform distribution on (0,1].

Step 2: For t=1,2,3,...,N form Yt = -log(Xt).

Step 3: Let S be the sum of all the Yt, t=1,2,3,...,N

Step 4: For t=1,2,3,...,N form Zt = Yt÷S

The claim is that the resulting distribution, (Z1,Z2,Z3,...,ZN), is what is desired. My simulations make it look true. Can anyone prove or disprove this method?
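A direct Python rendering of steps 1 through 4 (the function name is mine; 1 - random.random() is used so the seed values fall in (0, 1], as step 1 requires):

```python
import math
import random

def random_distribution(n):
    """Steps 1-4: draw X_t uniform on (0, 1], set Y_t = -log(X_t),
    sum them to get S, and return Z_t = Y_t / S."""
    y = [-math.log(1.0 - random.random()) for _ in range(n)]
    s = sum(y)
    return [yt / s for yt in y]
```

For what it's worth, the Y_t here are exponentially distributed, and normalizing exponential variables is a standard way of sampling uniformly from the simplex, which is consistent with the claim.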


You don't really need the log part. Taking the sum of all the values, then dividing each value by the sum, will make the total of the result = 1:

x/(x+y) + y/(x+y) = 1


Phillip2882: Yes, but without the log, the result is very non-uniformly distributed. Try graphing it.


Really? I have a hard time seeing how taking three random numbers and then applying the log (or any other function) to them would make them more uniform. I tried the following:

```
import random
import math

# tallies for which third of [0,1] each coordinate lands in:
# [0,1/3), [1/3,2/3), [2/3,1]
counts_plain = [0, 0, 0]  # coordinates normalized by the plain sum
counts_log = [0, 0, 0]    # coordinates normalized by the sum of the logs

def tally(v, counts):
    if v < 1/3:
        counts[0] += 1
    elif v < 2/3:
        counts[1] += 1
    else:
        counts[2] += 1

for i in range(100000):
    value1 = random.random()
    value2 = random.random()
    value3 = random.random()
    sum1 = value1 + value2 + value3
    value1L = math.log(value1)
    value2L = math.log(value2)
    value3L = math.log(value3)
    sum2 = value1L + value2L + value3L
    tally(value1 / sum1, counts_plain)
    tally(value2 / sum1, counts_plain)
    tally(value3 / sum1, counts_plain)
    tally(value1L / sum2, counts_log)
    tally(value2L / sum2, counts_log)
    tally(value3L / sum2, counts_log)

print(*counts_plain)
print(*counts_log)
```
and got the following, without log:
```
x<1/3     1/3<x<2/3   2/3<x<1
149980    141766      8254
```
and with log:
```
x<1/3     1/3<x<2/3   2/3<x<1
166617    100008      33375
```

out of 100000 trials.

Doesn't seem like much of an improvement. Maybe you could suggest some code to try?


You're counting the wrong things.  The triplet (value1L, value2L, value3L) is distributed uniformly over the surface of the triangle whose vertices are (1,0,0), (0,1,0), and (0,0,1); but (value1, value2, value3) is not.

Okay, I gotcha, super. Thank you.

Here's my corrected code, and the result.

```
import random
import math

# tallies for the four congruent sub-triangles of the simplex x+y+z=1:
# the three corner triangles (one coordinate > 1/2) and the middle one
areas_plain = [0, 0, 0, 0]
areas_log = [0, 0, 0, 0]

def tally(x, y, z, areas):
    if y > 1/2:
        areas[0] += 1
    elif x > 1/2:
        areas[1] += 1
    elif z > 1/2:
        areas[3] += 1
    else:
        areas[2] += 1

for i in range(100000):
    value1 = random.random()
    value2 = random.random()
    value3 = random.random()
    sum1 = value1 + value2 + value3
    value1L = math.log(value1)
    value2L = math.log(value2)
    value3L = math.log(value3)
    sum2 = value1L + value2L + value3L
    tally(value1 / sum1, value2 / sum1, value3 / sum1, areas_plain)
    tally(value1L / sum2, value2L / sum2, value3L / sum2, areas_log)

print(*areas_plain)
print(*areas_log)
```

and the result, without log:

16606 16927 49891 16576

and with log:

24977 24900 25155 24968


In post 17 I explained a method for uniformly covering a triangle that is a modification of the simple problem of covering a square.

Stretch the square into a rectangle, skew the rectangle into a rhombus, then cut the rhombus diagonally into a triangle.

The points in the unused triangle can be discarded or reflected back into the used triangle.

This method, and appropriate methods for other 2-D and 3-D shapes, can be found on Wolfram's MathWorld site.
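One concrete reading of this square-to-triangle construction in two dimensions, using the reflection variant (the function name is mine):

```python
import random

def uniform_in_triangle():
    """Pick a point uniformly in the unit square; if it lands in the
    half above the diagonal x + y = 1, reflect it back into the
    lower-left triangle. The triangle point (x, y) then maps to the
    distribution (x, y, 1 - x - y)."""
    x, y = random.random(), random.random()
    if x + y > 1:               # point fell in the unused triangle
        x, y = 1 - x, 1 - y     # reflect it back into the used one
    return x, y, 1 - x - y
```

Reflecting rather than discarding keeps every generated point, at the cost of no extra random draws.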


My earlier solutions showed that a method that distributes points evenly will place them on the x-axis with probability density p(x) = 2-2x. The probability that a randomly selected point has an x-coordinate less than some value 'a' is the integral of this: Int[p(x)] (x=0 to x=a) = 2a - a^2.

The important concept here is that while these formulas do not describe the only way to generate such a distribution, they do describe characteristics of a uniform distribution that must be satisfied by any solution that creates a uniform distribution.

Now to test the proposed solution: x = -log(xseed) / (-log(xseed) - log(yseed) - log(zseed)) where nseed is the "seed" random number between 0 and 1 that is used to generate coordinates. First, this can be simplified.

x = -log(xseed) / (-log(xseed) - log(yseed) - log(zseed))

x = log(xseed) / (log(xseed) + log(yseed) + log(zseed))

x = log(xseed) / log(xseed * yseed * zseed)

If this were simply a function of xseed then the probability that this will be less than some value 'a' would be given by solving for the values of xseed that would produce a value of x less than 'a'. Note that as xseed increases, x will decrease, so x will be less than 'a' for any xseed greater than the xseed that generates x=a. So (the probability that x<a) is equal to (one minus the xseed that generates x=a).

Solving for the xseed that generates x=a,

log(xseed) / log(xseed * yseed * zseed) = a

log(xseed) = a log(xseed * yseed * zseed)

xseed = (xseed * yseed * zseed)^a

xseed^(1-a) = (yseed * zseed)^a

xseed = [(yseed * zseed)^a] ^ (1/(1-a))

xseed = (yseed * zseed)^(a/(1-a))

Now since [the probability that x<a] is equal to [one minus this xseed] (if you are given any particular yseed and zseed)...

Prob(x<a) = 1 - (yseed * zseed)^(a/(1-a))

Since yseed and zseed are independently generated random numbers, what you really are looking for is the integral of this over the domains of yseed and zseed.

Prob(x<a) = Int[1 - (yseed * zseed)^(a/(1-a))] (yseed=0 to 1 and zseed=0 to 1)

First integrating on yseed,

Prob(x<a) = Int[1 - (yseed * zseed)^(a/(1-a))] over (yseed=0 to 1)

= Int[1 - yseed^(a/(1-a)) * zseed^(a/(1-a))] over (yseed=0 to 1)

= yseed - zseed^(a/(1-a)) * Int[yseed^(a/(1-a))] over (yseed=0 to 1)

= yseed - zseed^(a/(1-a)) * (1-a) yseed^(1/(1-a)) over (yseed=0 to 1)

= 1 - zseed^(a/(1-a)) * (1-a)

Then integrating on zseed

Prob(x<a) = Int[1 - zseed^(a/(1-a)) * (1-a)] over (zseed=0 to 1)

= zseed - (1-a) * Int[zseed^(a/(1-a))] over (zseed=0 to 1)

= zseed - (1-a) * (1-a) zseed^(1/(1-a)) over (zseed=0 to 1)

= 1 - (1-a) * (1-a)

= 1 - (1 - 2a + a^2)

= 2a - a^2

To recap, I've just proven that using your friend's method, Prob(x<a) = 2a - a^2. And I've proven in the previous posts that for a uniform distribution Prob(x<a) = 2a - a^2. So, although this is not a very intuitive way of proving it, it does prove that the proposed method creates a uniform distribution.
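A numerical spot-check of this conclusion, estimating Prob(x < a) for a = 0.5 under the -log construction (the helper name is mine):

```python
import math
import random

def prob_x_below(a, trials=100000):
    """Estimate Prob(x < a), where x is the first of three -log(seed)
    values divided by their sum, for comparison with 2a - a^2."""
    hits = 0
    for _ in range(trials):
        logs = [-math.log(1.0 - random.random()) for _ in range(3)]
        if logs[0] / sum(logs) < a:
            hits += 1
    return hits / trials

random.seed(0)
estimate = prob_x_below(0.5)   # the derivation predicts 2(0.5) - 0.5^2 = 0.75
```

With 100,000 trials the estimate should land within a few thousandths of 0.75, matching the derivation above.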


Thanks, Plasmid. I haven't had time to go through this thoroughly, but it looks pretty good so far. You've solved one of the little mysteries which have been nagging me for years!
