BrainDen.com - Brain Teasers

superprismatic

Question

You have two bags containing balls. Bag A contains 2 white balls, bag B contains 2 black balls. You randomly pick a ball from bag A and place it into bag B, then you randomly pick a ball from bag B and place it into bag A - this will be called an iteration. Note that after any number of iterations the two bags will each contain 2 balls. What is the expected number of iterations needed to go from the initial configuration of 2 white balls in bag A and 2 black balls in bag B to a final configuration of 2 black balls in bag A and 2 white balls in bag B?

18 answers to this question

Recommended Posts

The minimum number of iterations is 2, if we pick a white ball from bag A and a black ball from bag B on each of the first two iterations.

The number of iterations is unbounded: if, after the first iteration, we always pick a black ball from bag A and a white ball from bag B, the process never reaches the final configuration.

Since the choices are random, we have to calculate the probability of choosing a white ball from bag A after the first iteration.

That's my first shot at this problem. If anyone has a good idea for solving it, I'd be happy to hear it.



Good problem.

Hopefully not as controversial as White Balls.

I just started school yesterday and won't have time to debate as passionately as I did in White Balls.

That is not to say your problem is less interesting.

One clarification: In an iteration, after you place a ball from bag A into bag B, and you are selecting from B to place back into A, do you include the ball you put into B in the same iteration, or do you randomly choose from one of the 2 balls initially in B at the start of the iteration?

I will assume for now that you randomly choose only from the balls that were in B at the start of the iteration when replenishing the number of balls in A.

The number of iterations until we have reached the final condition is a random variable that may range from 2 to infinity as NeoPark mentioned.

Let us call this variable X and define its probability mass function as f(k) = P[X = k].

Let us examine the possible states of the bags.

State S1: A has WW, B has BB

State S2: A has WB, B has WB where order does not matter

State S3: A has BB, B has WW

Once state 3 is reached, everything is over, and we have a value for X.

There are four physical possibilities for each iteration.

Case C1: Take W from A, take W from B

Case C2: Take W from A, take B from B

Case C3: Take B from A, take W from B

Case C4: Take B from A, take B from B.

P[C1|S1] = 0

P[C2|S1] = 1

P[C3|S1] = 0

P[C4|S1] = 0

Thus if state S1 is ever reached, no matter what happens, case C2 is the only possible iteration result.

When C2 is applied, the bags move into state S2.

P[C1|S2] = 0.25

P[C2|S2] = 0.25

P[C3|S2] = 0.25

P[C4|S2] = 0.25

In state S2, if C1 or C4 happens, then we remain in S2 and the iteration count increases.

If C2 happens, then the game is over.

If C3 happens, then we move to state S1 (which, as we have seen, is equivalent to remaining in S2 but increasing the iteration count by 2).

Let us make an equivalent problem that is easier to analyze.

The iteration count always begins at 1.

Case Z1: With probability 0.5, the count increases by 1, and the game continues.

Case Z2: With probability 0.25, the count increases by 2, and the game continues.

Case Z3: With probability 0.25, the count increases by 1, the game ends.

What is the expected value of the iteration count?

f(0) = 0, obviously, since the iteration count starts at 1.

f(1) = 0, since a turn must be made before the game ends

f(2) = P[Z3] = 0.25

f(3) = P[Z3]*P[Z1] = 0.125

f(4) = P[Z3]*P[Z2] + P[Z3]*P[Z1]*P[Z1] = 0.125

f(5) = P[Z3]*P[Z2]*P[Z1]*(2 choose 1) + P[Z3]*P[Z1]*P[Z1]*P[Z1] = 0.09375

f(6) = P[Z3]*P[Z2]*P[Z2] + P[Z3]*P[Z2]*P[Z1]*P[Z1]*(3 choose 1) + P[Z3]*P[Z1]*P[Z1]*P[Z1]*P[Z1]

To obtain the value for a general term f(k):

Find every ordered way to add 2's and 1's to make k-2, sum the probabilities of each such sequence, and multiply by P[Z3].

Example: k=7, k-2 = 5,

5 = 2 + 2 + 1

= 2 + 1 + 2

= 1 + 2 + 2

= 2 + 1 + 1 + 1

= 1 + 2 + 1 + 1

= 1 + 1 + 2 + 1

= 1 + 1 + 1 + 2

= 1 + 1 + 1 + 1 + 1

There are (3 choose 1) ways of combining 2,2, and 1 to make 5, so we can group those together.

Similarly, there are (4 choose 1) ways of combining 1,1,1, and 2 to make 5, so you can use these rules to get any particular f(k).
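This grouping rule can be sanity-checked with a short Python sketch (the helper name is mine, not from the thread): grouping by the number of 2's used, there are (m-y choose y) arrangements containing y 2's.

```python
from math import comb

def compositions_1_2(m):
    """Count ordered sums of 1's and 2's totaling m.
    With y 2's there are m - 2y 1's, hence m - y parts in total,
    and C(m - y, y) ways to place the 2's among them."""
    return sum(comb(m - y, y) for y in range(m // 2 + 1))

# Sanity check against the k = 7 example above: m = 5 has 8 compositions.
print(compositions_1_2(5))   # 8

# The counts follow the Fibonacci recurrence c(m) = c(m-1) + c(m-2):
# a composition either starts with a 1 (leaving m-1) or a 2 (leaving m-2).
print([compositions_1_2(m) for m in range(10)])
```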

To get the expected value, a general expression is required for f(k).

The answer would then be sum_{k=0 to infinity} k*f(k)

The next step is to get a general expression for f(k)


For a general number m = k-2

We must find the number of ways to combine 2's and 1's to make m.

It is always possible to add 1's to get m.

This corresponds to P[Z1]^m.

If we add in a single 2 to the fray, we lose two 1's:

P[Z2]*P[Z1]^(m-2)

There are m-2 1's and one 2 giving m-1 spots.

The 2 can be in any spot, so there are (m-1 choose 1) ways of doing what we want to do.

If we add y 2's into the fray we lose 2y 1's:

P[Z2]^y*P[Z1]^(m-2y)

There are m-2y + y digits or m-y digits, and y of them are 2's.

There are (m-y choose y) ways of doing what we want to do.

So a general term is

(m-y choose y) *P[Z2]^y * P[Z1]^(m-2y)

This is valid while m-2y >= 0

or y <= m/2

If m is odd, then y <= (m-1)/2

In either case: y <= floor(m/2)

f(k) = P[Z3]*sum_{y=0 to floor(m/2)}((m-y choose y) *P[Z2]^y * P[Z1]^(m-2y))

f(k) = (0.25)*sum_{y=0 to floor((k-2)/2)}((k-2-y choose y) * 0.25^y * 0.5^(k-2-2y))

I go now to Wolfram Mathematica:

f[k_] := (1/4)*Sum[Binomial[k - 2 - y, y]*(1/4)^y*(1/2)^(k - 2 - 2 y), {y, 0, Floor[(k - 2)/2]}];

Sum[k*f[k], {k, 0, \[Infinity]}]

This spits out a value of 6.

I suspect there is an easier way of doing this.
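For anyone without Mathematica, here is an equivalent Python sketch of the same sum, truncated at an arbitrary cap (the terms decay geometrically, so a few hundred is far more than enough):

```python
from math import comb

def f(k):
    """P[X = k]: the final ending step (prob 0.25) preceded by any ordered
    composition of k - 2 into 1's (prob 0.5 each) and 2's (prob 0.25 each)."""
    m = k - 2
    if m < 0:
        return 0.0
    return 0.25 * sum(comb(m - y, y) * 0.25**y * 0.5**(m - 2 * y)
                      for y in range(m // 2 + 1))

# Truncate the infinite sums; the tail beyond k = 400 is negligible.
total_prob = sum(f(k) for k in range(2, 400))
expectation = sum(k * f(k) for k in range(2, 400))
print(round(total_prob, 9), round(expectation, 9))  # close to 1.0 and 6.0
```

The first printed value checks that f is a proper probability mass function; the second reproduces the expected value of 6 under the swap interpretation.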


Haven't been here for so long. I miss superprismatic's problems.

Let W stand for white and B for black. We denote the contents of a bag by (color, color), e.g. (W, W) for both whites. We represent the state of the problem at any time as [(color, color), (color, color)], where the first pair stands for bag A and the second for bag B. There are three states to this problem: [(W,W), (B,B)], [(B,W), (W,B)], and [(B,B), (W,W)].

We can easily compute the probability that any state will move to another during an iteration. For instance, from the beginning state [(W,W) , (B,B) ], for an iteration we can only move to the intermediate state [(B,W) , (W,B) ] with probability 2/3, and back to itself with probability 1/3. The directed graph for state movement is

[Attached image: directed graph of state transitions]

From here, we can compute the expected number of iterations it takes to move from [(W,W) , (B,B) ] to [(B,B) , (W,W) ]. That number is 9.
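This three-state chain can be checked by first-step analysis in exact arithmetic; a Python sketch (the state numbering is mine):

```python
from fractions import Fraction as F

# States: 0 = [(W,W),(B,B)], 1 = [(B,W),(W,B)], 2 = [(B,B),(W,W)] (absorbing).
# One-iteration transition probabilities from the post above, where the ball
# just moved into bag B is eligible to be picked back out.
P = [[F(1, 3), F(2, 3), F(0)],
     [F(1, 6), F(4, 6), F(1, 6)]]

# First-step analysis: E[i] = 1 + sum_j P[i][j] * E[j], with E[2] = 0.
# This is a 2x2 linear system in E0, E1; solve it by Cramer's rule.
a, b = 1 - P[0][0], -P[0][1]
c, d = -P[1][0], 1 - P[1][1]
det = a * d - b * c
E0 = (d - b) / det   # right-hand side of the system is (1, 1)
E1 = (a - c) / det
print(E0, E1)        # 9 and 15/2
```

E0 is the expected number of iterations from the starting configuration, reproducing the answer of 9.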



Hello Bushindo, it has been a while. Superprismatic's Walter Penney puzzles were the best.

In this problem, you interpret that once you have placed a ball from A into B, when you select a ball in B to place in A, you may select any of the 3 in B.

Looking at the wording, that may be the correct interpretation.

I assumed each iteration meant swapping a ball between the bags:

"You randomly pick a ball from bag A and place it into bag B, then you randomly pick a ball from bag B and place it into bag A - this will be called an iteration."

Because he uses the word "then", I think I originally had an incorrect interpretation.

May I ask how you calculated the expected value from your graph?



Hi mmiguel1, it really has been a while. Looks like you have been a very productive member on this board. I remember a couple of months ago you had your first post, and now you have almost the same post count as me.

Anyways, to compute the expected value from the graph, let's name the states. Let the first state [(W,W) , (B,B) ] be F, the intermediate state [(B,W) , (W,B) ] be I, and the last state [(B,B) , (W,W) ] be L. Let FL be the expected number of iterations from F to L, we can see that it can be decomposed into FL = FI + IL, where FI and IL are defined similarly to FL.

Examining the graph, we can write the following equation for FI

FI = 2/3 + (1/3)*( FI + 1 )

Solving for FI, we see that it takes an average of 3/2 iterations to go from first stage to intermediate stage. The equation for IL is

IL = (1/6)*(1) + (1/6)*( 1 + FI ) + (4/6)*( IL + 1 ).

We can now solve for IL, and compute FL = FI + IL.



Yes, there are some fun problems on here. It's a nice site.

You have a very good, simple solution.

Not very important, but I think you missed an IL though

IL = (1/6)*(1) + (1/6)*( 1 + FI ) + (4/6)*( IL + 1 )

IL = (1/6)*(1) + (1/6)*( 1 + FI + IL) + (4/6)*( IL + 1 )

You obviously included it in your own calculation to get a value of 9.

When I try your method on the problem in the way I interpreted, I get the same value that I cranked out using probability rules and algebra.

Next time I come across a problem like this, I will choose your method.

Great job!
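A quick exact-arithmetic check of the corrected system (a Python sketch; the names FI, IL, FL follow the posts above):

```python
from fractions import Fraction as F

# FI = 2/3 + (1/3)(FI + 1)  =>  (2/3) FI = 1  =>  FI = 3/2
FI = F(3, 2)

# Corrected equation:
#   IL = (1/6)(1) + (1/6)(1 + FI + IL) + (4/6)(IL + 1)
# Collect the IL terms: IL (1 - 1/6 - 4/6) = 1/6 + (1 + FI)/6 + 4/6
IL = (F(1, 6) + (1 + FI) / 6 + F(4, 6)) / (1 - F(1, 6) - F(4, 6))
FL = FI + IL
print(FI, IL, FL)    # 3/2, 15/2, and 9
```

With the missing IL restored, the decomposition FL = FI + IL indeed gives 9.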


Maybe you guys can help me with a stumbling block that always keeps me from trying these types of questions.

What exactly does it mean when it asks for the expected number of iterations to go from one state to another?

When you say the answer is 9 do you mean that after 9 iterations you will have had a 50% or greater chance to have seen BB,WW at least once or something?


One clarification: In an iteration, after you place a ball from bag A into bag B, and you are selecting from B to place back into A, do you include the ball you put into B in the same iteration, or do you randomly choose from one of the 2 balls initially in B at the start of the iteration?

Include the new ball which was placed into B in the same iteration.



The expected number of iterations is the sum of products of the probability of a path with its length. The sum is over all possible paths. The expected length is what we usually call the average length. I hope this helps.
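That sum over paths can be evaluated directly by tracking the probability of being in each state iteration by iteration and accumulating length times stopping probability (a Python sketch, using the transition probabilities from the earlier posts under the include-the-new-ball interpretation):

```python
# States: 0 = [WW, BB], 1 = [WB, WB]; reaching [BB, WW] ends the game.
# Per-iteration transitions: from 0, stay with 1/3 or go to 1 with 2/3;
# from 1, go to 0 with 1/6, stay with 4/6, finish with 1/6.
p0, p1 = 1.0, 0.0                # distribution after 0 iterations
expected = 0.0
for k in range(1, 2000):         # cap the sum; the tail decays geometrically
    expected += k * (p1 / 6)     # probability the game ends exactly at iteration k
    p0, p1 = p0 / 3 + p1 / 6, 2 * p0 / 3 + 2 * p1 / 3
print(round(expected, 6))        # approaches 9 as the cap grows
```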



Nice catch. I didn't see that mistake. Your approach is one that I didn't think of. I solved this problem because I had learned about Markov chains before, so the framework was there. I find your solution more impressive since it is a brilliant and original application of probability rules and algebra. Well done.



Here is how I interpret the expected value with regard to this problem.

Let's define a complete game as one where we start at the initial conditions of the game, and perform iterations until we reach the ending conditions.

The number of iterations is a random number: it may be the answer that bushindo gave, or possibly 3 more or 3 less than that, or even 100 more (with very small probability).

Let's play one complete game and record the number of iterations.

We can then play another complete game and get a different number of iterations (which is random with the same probability distribution).

If we play N different complete games we can get an average iteration number by summing all of the random values we got and dividing by N.

Let's call the sum of all the values S.

Average = S/N

Here is the key step. Assuming that the iteration number has the same probability distribution in each complete game (i.e. the likelihood of each outcome doesn't change if you play the game again), then the limit of S/N as N approaches infinity (if we play an infinite number of complete games) will approach the expected value of the iteration number.

So think about the expected value as the average result if you perform the random process an infinite number of times.

This isn't strictly the defining expression for the expected value, but it is true under the conditions I stated.

There is a theorem that states this called the Law of Large Numbers (funny name).

The definition of expected value for a discrete variable is the sum of each value the variable can take, weighted by the probability of getting that value.

Hope that helps.
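That "average over many complete games" picture is easy to simulate; a Python sketch of the include-the-new-ball interpretation (seed fixed so the run is repeatable):

```python
import random

def play_game(rng):
    """One complete game: iterate until bag A holds both black balls.
    The ball just moved into B is eligible to come straight back."""
    A, B = ["W", "W"], ["B", "B"]
    iterations = 0
    while sorted(A) != ["B", "B"]:
        B.append(A.pop(rng.randrange(len(A))))   # A -> B
        A.append(B.pop(rng.randrange(len(B))))   # B -> A, 3 balls to choose from
        iterations += 1
    return iterations

rng = random.Random(0)           # fixed seed for repeatability
N = 200_000
avg = sum(play_game(rng) for _ in range(N)) / N
print(round(avg, 2))             # should land near the exact answer of 9
```

By the Law of Large Numbers, the printed average converges to the expected value as N grows.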


Nice catch. I didn't see that mistake. Your approach is one that i didn't think of. I solved this problem because I learned about Markov random chains before, so the framework is there. I find your solution to be more impressive since it is a brilliant and original application of probability rules and algebra. Well done.

Thanks, I appreciate that, but the method I used cannot match the efficiency and usefulness of the one you used.

I read a quote today that this reminds me of.

"Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it."

-Alan J. Perlis, "Epigrams on Programming"


The path in Bushindo's state diagram from WW BB to BB WW multiplies out as 2/3 x 1/6 = 2/18 = 1/9.

The reciprocal is 9.

Or is that exactly what he said?

Does that give the expected value in general or is it just coincidence here?

Not to say it isn't true, it just seems too easy to believe to me, and I don't have time at this moment to work it out in a general case.

I will try later if no one else argues about it.



It seems to be a coincidence.

Suppose we modify the state movement mechanism so that BW BW no longer has a chance to go back to itself, and make P( WW BB | BW BW) = 5/6 instead of the original 1/6. We then have the following state diagram

[Attached image: modified state-transition diagram]

The transition from WW BB to BW BW and from BW BW to BB WW are still the same: (2/3) and (1/6). However, since we are a lot more likely to go back to our starting point, the expected number of iterations to completion is now 15 instead of 9.
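The modified chain can be checked the same way as the original (a Python sketch using first-step analysis; the product along the direct path is still 2/3 × 1/6 = 1/9, yet the expected value comes out to 15):

```python
from fractions import Fraction as F

# Modified chain: the intermediate state never stays put; it returns to
# the start with probability 5/6 and finishes with probability 1/6.
# States: 0 = WW BB, 1 = BW BW, 2 = BB WW (absorbing).
P = [[F(1, 3), F(2, 3), F(0)],
     [F(5, 6), F(0), F(1, 6)]]

# First-step equations E[i] = 1 + sum_j P[i][j]*E[j], E[2] = 0,
# solved by Cramer's rule on the 2x2 transient block.
a, b = 1 - P[0][0], -P[0][1]
c, d = -P[1][0], 1 - P[1][1]
det = a * d - b * c
E0 = (d - b) / det
E1 = (a - c) / det
print(E0, E1)        # 15 and 27/2
```

This confirms that matching the reciprocal of the path probability in the original chain was a coincidence.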


Makes sense to me.
