BrainDen.com - Brain Teasers
Wondering what all these infinite series are about ... ;)

After 6 rolls of the die, 6 numbers will appear.

The average number of appearances of the numbers 1-6 is the same [for a fair die] and they total 6.

If 6 quantities are equal and total 6, each of them equals 1. <_<

Maybe I'm misreading this, but I think the problem with it is that it doesn't answer the question. What you're saying is that with 6 rolls, a 6 will appear, on average, once. But the question was, how long, on average, you'd have to wait for the first 6.
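The question as restated here -- the average wait for the first 6 -- can be checked directly by simulation. This is a quick sketch (seed and trial count are arbitrary):

```python
import random

def rolls_until_six(rng):
    """Roll a fair die until the first 6 appears; return the roll count."""
    count = 0
    while True:
        count += 1
        if rng.randint(1, 6) == 6:
            return count

rng = random.Random(42)
trials = 100_000
mean_wait = sum(rolls_until_six(rng) for _ in range(trials)) / trials
print(mean_wait)  # close to 6
```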
Maybe I'm misreading this, but I think the problem with it is that it doesn't answer the question. What you're saying is that with 6 rolls, a 6 will appear, on average, once. But the question was, how long, on average, you'd have to wait for the first 6.

They're not different questions.

To see this, ask the same question for 1, 2, 3, 4, and 5.

The answer, for a fair die, for each number is the same: one occurrence each for 6 rolls.

If there is on average one occurrence, it is the first occurrence.

They're not different questions.

To see this, ask the same question for 1, 2, 3, 4, and 5.

The answers, for a fair die, for each number are the same: one occurrence each for 6 rolls.

If there is on average one occurrence, it is the first occurrence.

To say that something happens on average once every six rolls is not exactly the same as saying you would have to wait on average 6 rolls for it to happen once. That does often tend to be the case, but it's one of those things that's so obvious it doesn't seem to require proof. Suppose you were picking numbers from 1 to 6 out of a bag (and not replacing them). If you picked 6 numbers out of the bag you'd expect each one to come up once. But what would be the average time it would take to pick a 6?
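The bag experiment described above is easy to simulate (a quick sketch; trial count arbitrary). Since the 6 is equally likely to land in any of the six positions of the shuffled bag, the average draw on which it appears is 3.5, not 6:

```python
import random

rng = random.Random(1)
trials = 100_000
total = 0
for _ in range(trials):
    bag = [1, 2, 3, 4, 5, 6]
    rng.shuffle(bag)               # a random order of draws, no replacement
    total += bag.index(6) + 1      # 1-based draw on which the 6 comes up
avg_draw = total / trials
print(avg_draw)  # close to 3.5
```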
To say that something happens on average once every six rolls is not exactly the same as saying you would have to wait on average 6 rolls for it to happen once. That does often tend to be the case, but it's one of those things that's so obvious it doesn't seem to require proof. Suppose you were picking numbers from 1 to 6 out of a bag (and not replacing them). If you picked 6 numbers out of the bag you'd expect each one to come up once. But what would be the average time it would take to pick a 6?

I already tried to prove the same point to Bonanova. But somehow, he went ahead and won a gambling match against me.


Maybe I'm missing something here, but it seems pretty simple to me...

If the die is fair, then we expect equal numbers of each result (i.e. 1...6) after a "large" number of trials (N->infty). So, if N/6 sixes are rolled in N trials, the average interval between them is obviously 6.

...

2. It would not be a fair die.

Maybe I'm missing something here, but it seems pretty simple to me...

If the die is fair, then we expect equal numbers of each result (i.e. 1...6) after a "large" number of trials (N->infty). So, if N/6 sixes are rolled in N trials, the average interval between them is obviously 6.

But if the die is unfair, the average number of rolls needed to roll a given number is still 1/p.

For example, if the probability of "6" on a loaded die were 1/2 and that of each other number 1/10, then it would take on average 2 rolls to roll a "6" and 10 rolls to roll any other number.

Maybe that example makes it clearer that your reasoning for proving the average is 1/p presumes beforehand that it is 1/p.
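The loaded-die figures above (2 rolls on average for a "6", 10 for anything else) can be confirmed empirically. A rough sketch, with arbitrary seed and trial count:

```python
import random

faces = [1, 2, 3, 4, 5, 6]
weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]   # loaded die: "6" half the time

def wait_for(face, rng):
    """Number of rolls of the loaded die until `face` first shows."""
    n = 0
    while True:
        n += 1
        if rng.choices(faces, weights)[0] == face:
            return n

rng = random.Random(7)
trials = 50_000
mean_wait_6 = sum(wait_for(6, rng) for _ in range(trials)) / trials
mean_wait_1 = sum(wait_for(1, rng) for _ in range(trials)) / trials
print(mean_wait_6, mean_wait_1)  # roughly 2 and 10
```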

But if the die is unfair, the average number of rolls needed to roll a given number is still 1/p.

For example, if the probability of "6" on a loaded die were 1/2 and that of each other number 1/10, then it would take on average 2 rolls to roll a "6" and 10 rolls to roll any other number.

Maybe that example makes it clearer that your reasoning for proving the average is 1/p presumes beforehand that it is 1/p.

I'm not disputing that. The probability can be anything. I just don't see why the proof needs to be so complicated.

Suppose the probability of rolling a six is p. So, after a "large" number of rolls (N), we will have rolled a six p*N times. It doesn't matter how the results are distributed, the intervals between sixes must add up to N, so the average interval is N/(pN) = 1/p. QED, no?
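The interval argument can likewise be checked numerically: simulate one long run, record the gap between consecutive sixes (counting from the start for the first one), and average. A sketch, with N and the seed chosen arbitrarily:

```python
import random

rng = random.Random(3)
N = 600_000
last = 0
gaps = []
for i in range(1, N + 1):
    if rng.randint(1, 6) == 6:
        gaps.append(i - last)   # rolls since the previous six
        last = i
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)  # close to 1/p = 6
```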

I'm not disputing that. The probability can be anything. I just don't see why the proof needs to be so complicated.

Suppose the probability of rolling a six is p. So, after a "large" number of rolls (N), we will have rolled a six p*N times. It doesn't matter how the results are distributed, the intervals between sixes must add up to N, so the average interval is N/(pN) = 1/p. QED, no?

For the case where you rolled p*N sixes out of N rolls -- yes, that's the average. On the other hand, why do you discard all those cases where you rolled N sixes out of N rolls, or N-1, N-2, ... sixes? Should we not account for those when establishing the formula for the average number of rolls? Is it because their probabilities are a bit smaller? Or is it because we have decided up front what the average is and thus need not consider other "non-average" cases?

For the case where you rolled p*N sixes out of N rolls -- yes, that's the average. On the other hand, why do you discard all those cases where you rolled N sixes out of N rolls, or N-1, N-2, ... sixes? Should we not account for those when establishing the formula for the average number of rolls? Is it because their probabilities are a bit smaller? Or is it because we have decided up front what the average is and thus need not consider other "non-average" cases?

Hunh?

I didn't discard them because they never happened. If the event didn't occur p*N times (remember, I said N is a LARGE number) then, by definition, its probability wasn't p.

O_o


Describe a single roll of a fair die.

[1] the likelihood of a particular number showing is the same as the likelihood of any other number showing.

[2] one of the numbers 1, 2, 3, 4, 5, 6 will show.

Roll a fair die 6 times.

ei is the expected number of appearances of the value i, for i = 1, ..., 6.

From [1], e1 = e2 = e3 = e4 = e5 = e6

etot is the expected total number of appearances of any number, regardless of value.

From [2] and the fact the die is rolled 6 times, etot = 6.

Since 1-6 are the only numbers that can show, etot = sum {ei} = 6 ei = 6, for all values of i.

ei = 1 for all i.

e6 =1.

For six rolls of a fair die, the expected number of appearances of a 6 is 1. That appearance is its first appearance.

The expected number of rolls of a fair die that will produce the first appearance of a 6 is 6.
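For comparison, the infinite-series route the thread opened with gives the same answer: the expected wait is the sum over k of k*p*(1-p)^(k-1), which for p = 1/6 converges to 6. A quick numerical check, truncating the series where the tail is negligible:

```python
p = 1 / 6
q = 1 - p
# Expected wait = sum over k >= 1 of k * P(first six occurs on roll k)
expected = sum(k * p * q ** (k - 1) for k in range(1, 1000))
print(expected)  # converges to 1/p = 6
```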

Hunh?

I didn't discard them because they never happened. If the event didn't occur p*N times (remember, I said N is a LARGE number) then, by definition, its probability wasn't p.

O_o

I have never heard of such a definition. If I understood you correctly, when we roll a die 6000 times and "6" does not come up exactly 1000 times, then we have proven that the probability of rolling "6" is not exactly 1/6.
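The point here can be quantified: even for a perfectly fair die, the chance of getting exactly 1000 sixes in 6000 rolls is small. A quick exact computation (Python's arbitrary-precision integers handle the big binomial coefficient before the final division):

```python
from math import comb

# P(exactly 1000 sixes in 6000 rolls of a fair die),
# i.e. C(6000, 1000) * (1/6)^1000 * (5/6)^5000, kept exact until the end.
p_exact_1000 = comb(6000, 1000) * 5 ** 5000 / 6 ** 6000
print(p_exact_1000)  # about 0.014
```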

Hunh?

I didn't discard them because they never happened. If the event didn't occur p*N times (remember, I said N is a LARGE number) then, by definition, its probability wasn't p.

O_o

Okay, I think you're confusing probability with statistics...

Probability is what you use before you have complete knowledge, i.e. at a time before you roll the die or if you only know certain information about the results. Statistics is what you use to analyze the result set, when you have complete knowledge of the result set. You can use statistics to estimate probabilities to within some degree of error, but you can't use it to PROVE that the probability is such and such.

It's not so obvious when talking about dice rolls, but a good example of the difference is the collapse of the wavefunction in quantum physics...

Before you observe the electron, it has a certain probability distribution, i.e. some probability of being in different locations in space. But after you observe the location of the electron, the wavefunction "collapses" into a delta function and the probability becomes 1 where it is. If you observe N electrons in similar circumstances, you can estimate the probability density function, but these results don't prove what the wavefunction is...you have to do that mathematically, with differential equations...my favorites ;P

Edit: In short, what I'm trying to say is that you can't use data sets, no matter how large, to prove probabilities, by definition.

Edited by Yoruichi-san
Okay, I think you're confusing probability with statistics...

Probability is what you use before you have complete knowledge, i.e. at a time before you roll the die or if you only know certain information about the results. Statistics is what you use to analyze the result set, when you have complete knowledge of the result set. You can use statistics to estimate probabilities to within some degree of error, but you can't use it to PROVE that the probability is such and such.

It's not so obvious when talking about dice rolls, but a good example of the difference is the collapse of the wavefunction in quantum physics...

Before you observe the electron, it has a certain probability distribution, i.e. some probability of being in different locations in space. But after you observe the location of the electron, the wavefunction "collapses" into a delta function and the probability becomes 1 where it is. If you observe N electrons in similar circumstances, you can estimate the probability density function, but these results don't prove what the wavefunction is...you have to do that mathematically, with differential equations...my favorites ;P

Edit: In short, what I'm trying to say is that you can't use data sets, no matter how large, to prove probabilities, by definition.

Okay, I've thought of an example that will demonstrate this in a way that makes more sense to everyone who is not me ;P:

If I'm going to roll a fair die 500 times, I know that there is some probability that I will roll no 6's. That probability is (5/6)^500, which is small, but not 0. After I roll the die, if I end up rolling any 6's, then you would say the probability of me rolling no 6's would be 0. If I roll no 6's in 500 rolls, you would say the probability of me rolling no 6's is 1. So you can't use the data set to predict the probability of me rolling no 6's, because the only two values you can get are 0 and 1.

After the event happens and the results are known, probabilities all become 0 or 1. You can look at the data statistically to try to estimate probabilities, but it is not a proof.
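The (5/6)^500 figure above is easy to evaluate; it is astronomically small but strictly positive:

```python
# Chance of no sixes at all in 500 rolls of a fair die.
p_no_sixes = (5 / 6) ** 500
print(p_no_sixes)  # about 2.6e-40: tiny, but not zero
```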

Describe a single roll of a fair die.

[1] the likelihood of a particular number showing is the same as the likelihood of any other number showing.

[2] one of the numbers 1, 2, 3, 4, 5, 6 will show.

Roll a fair die 6 times.

ei is the expected number of appearances of the value i, for i = 1, ..., 6.

From [1], e1 = e2 = e3 = e4 = e5 = e6

etot is the expected total number of appearances of any number, regardless of value.

From [2] and the fact the die is rolled 6 times, etot = 6.

Since 1-6 are the only numbers that can show, etot = sum {ei} = 6 ei = 6, for all values of i.

ei = 1 for all i.

e6 =1.

For six rolls of a fair die, the expected number of appearances of a 6 is 1. That appearance is its first appearance.

The expected number of rolls of a fair die that will produce the first appearance of a 6 is 6.

So the proof is based on a specific number of rolls -- 6 -- using the expected number of times each individual number appears.

1. Expected value is an average (mean) by definition. It is calculated by adding, over each possible occurrence, the probability of that occurrence times its weight. So if the probability of rolling "6" exactly 1 time out of 6 rolls is p1, exactly 2 times out of 6 is p2, and so on, then the expected number of times "6" comes up is 1*p1 + 2*p2 + 3*p3 + 4*p4 + 5*p5 + 6*p6, which adds up to 1 if we plug in the actual values of those probabilities.

2. Instead of calculating the expected value from its definition, the proof asserts that there are a total of 6 equal expected values and that their sum should equal the number of rolls in the experiment -- 6.

3. Then, relying on the equality of the expected values, the proof finds an individual one by dividing the total number of rolls by the number of participants: 6/6 = 1.

4. Having thus found that the average number of times "6" comes up over 6 rolls is one, the proof concludes that the average number of rolls to produce an individual number is 6.

What I see here is that this proof takes the notion of the average defined elsewhere, manipulates it over a specific number of experiments, and arrives at a value of that average (expected value) for the chosen number of experiments. I have the most doubts about step 3 above as a proof.

Consider an example where the probabilities of the separate events are not equal. Say there are a total of 3 events with probabilities 1/2, 1/3, and 1/6. Then, following the same line of reasoning, we'd have to assert that event one (with probability 1/2) occurred 3 times out of 6 because that's the proportion defined by its probability. But isn't that practically the same thing for which a proof was requested, and not an assertion?

We can split hairs indefinitely. Still, I feel that the proof Y-s gave (the same one I had in mind) does not leave much room for argument or doubt. Your proof, based on a fair division of space between individuals over a given interval, still seems to me more a common-sense assertion of what an average is than a formal proof.
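Step 1 above can at least be verified exactly: summing k times the binomial probability of exactly k sixes in 6 rolls does give 1. A quick check with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

p, q = Fraction(1, 6), Fraction(5, 6)
# pk[k] = P(exactly k sixes in 6 rolls of a fair die)
pk = [comb(6, k) * p ** k * q ** (6 - k) for k in range(7)]
expected_sixes = sum(k * pk[k] for k in range(7))
print(expected_sixes)  # exactly 1
```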

Your proof, based on a fair division of space between individuals over a given interval, still seems to me more a common-sense assertion of what an average is than a formal proof.

I accept without proof that you do not accept this as a proof.

If you want me to accept that it is not a proof, I will.

Just point out where it's invalid.


Sorry, but I just have to point something out...

Bonanova's explanation makes a lot of sense, but I agree with Octopuppy that it seems to be going the opposite direction as Prime's original question...

...and I think the "infinite exponential" thread has aptly demonstrated that going forwards and backwards aren't necessarily the exact same thing...proving that 2 is equal to the convergence for the infinite exponential series of sqrt(2) does not necessarily mean the convergence of the series will always equal 2...it could also equal 4...;)

I accept without proof that you do not accept this as a proof.

If you want me to accept that it is not a proof, I will.

Just point out where it's invalid.

I think a counterexample is called for. The example of pulling numbers out of a bag was a bit contrived and not a continuous process, so I've tried to come up with a better one. I just hope I did the maths right 'cos if this works out to be exactly 6 I'll be so embarrassed. Probability is so slippery...

Here's your counterexample. Have fun!

I think a counterexample is called for. The example of pulling numbers out of a bag was a bit contrived and not a continuous process, so I've tried to come up with a better one. I just hope I did the maths right 'cos if this works out to be exactly 6 I'll be so embarrassed. Probability is so slippery...

Here's your counterexample. Have fun!

Your counterexample seems to do a random 1-dim walk along the numbers 1-6.

Using a coin toss to move forward or backward, you'll stay put after 2 tosses with 50% probability. :o

What are you saying this is a counter example of?

Certainly not of a fair die, where any roll can give any result.

I'm clearly missing the point.

I have never heard of such a definition. If I understood you correctly, when we roll a die 6000 times and "6" does not come up exactly 1000 times, then we have proven that the probability of rolling "6" is not exactly 1/6.

No, I never said that. I said if you roll a die a LARGE number of times you will have p*N outcomes of six (or any other number you choose).

I think we can all agree on the definition of probability as the "likelihood of an outcome," but we can also say it is the frequency with which an outcome occurs over time. So, if the number of sixes we roll with a die having probability p of rolling a six does not converge to p*N as N->infinity, then you must conclude either that N is not large enough, or that the probability of a six is not p.

No, I never said that. I said if you roll a die a LARGE number of times you will have p*N outcomes of six (or any other number you choose).

I think we can all agree on the definition of probability as the "likelihood of an outcome," but we can also say it is the frequency with which an outcome occurs over time. So, if the number of sixes we roll with a die having probability p of rolling a six does not converge to p*N as N->infinity, then you must conclude either that N is not large enough, or that the probability of a six is not p.

No.... I think all you can conclude is that, given that you have rolled the die N times and gotten no sixes, the probability that the die is fair (i.e. that P(6)=p) is... a small number. But it still may be true.

I believe Bayesian probability is involved in some way.
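One hedged illustration of the Bayesian point (the model and the numbers are invented for the example): give 50/50 prior odds to a fair die versus a die that can never show a 6, then observe no sixes in 50 rolls. The posterior probability that the die is fair becomes small, but not zero:

```python
# Hypothetical prior: 50/50 between a fair die and a die that never shows 6.
prior_fair = 0.5
likelihood_fair = (5 / 6) ** 50   # P(no sixes in 50 rolls | fair die)
likelihood_never = 1.0            # P(no sixes in 50 rolls | never-six die)
posterior_fair = (prior_fair * likelihood_fair /
                  (prior_fair * likelihood_fair + (1 - prior_fair) * likelihood_never))
print(posterior_fair)  # about 1e-4: small, yet the die may still be fair
```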

Okay, I think you're confusing probability with statistics...

Actually, I am not. I am acutely aware that for any finite number of rolls, there is a probability, however small, that they will all be sixes. Or none will. Or every other one will. The likelihood of each of these can be computed. What is not possible however, is that if you continued rolling indefinitely, you would have any result other than N/6.

Probability is what you use before you have complete knowledge, i.e. at a time before you roll the die or if you only know certain information about the results. Statistics is what you use to analyze the result set, when you have complete knowledge of the result set. You can use statistics to estimate probabilities to within some degree of error, but you can't use it to PROVE that the probability is such and such.

O_o

The OP clearly implies that the probability of rolling a six is 1/6. Without that knowledge, you cannot solve the problem. My proof is also not empirical (I use no statistics).

Before you observe the electron, it has a certain probability distribution, i.e. some probability of being in different locations in space. But after you observe the location of the electron, the wavefunction "collapses" into a delta function and the probability becomes 1 where it is. If you observe N electrons in similar circumstances, you can estimate the probability density function, but these results don't prove what the wavefunction is...you have to do that mathematically, with differential equations...my favorites ;P

So what? Before you roll a die, the probability of a particular outcome (say rolling a 6) is 1/6. After the event is observed, it is either 0 (not a six) or 1 (a six). No PDE's required :P

No.... I think all you can conclude is that, given that you have rolled the die N times and gotten no sixes, the probability that the die is fair (i.e. that P(6)=p) is... a small number. But it still may be true.

I believe Bayesian probability is involved in some way.

There is no reference set, so I don't see what Bayes has to do with it. I'm no mathematician though, so maybe you mean something other than what I understand Bayesian probability to mean...

Your counterexample seems to do a random 1-dim walk along the numbers 1-6.

Using a coin toss to move forward or backward, you'll stay put after 2 tosses with 50% probability. :o

What are you saying this is a counter example of?

Certainly not of a fair die, where any roll can give any result.

I'm clearly missing the point.

Naturally it's not a fair die. Of course I'm monkeying with the probabilities.

The point is that, using 6 turns of my magic wheel (any 6, mind you), the expected number of appearances of any given number would be, of course, 1 (since there are 6 numbers and each is as likely as any other).

Therefore, using this bit of logic, one could only conclude that the expected number of turns to produce a 6 would also be 6.

But is it??? :rolleyes:
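The spoilered counterexample itself isn't quoted here, but one concrete wheel matching the description elsewhere in the thread -- a coin-toss walk along the numbers 1-6, arranged in a circle for simplicity (an assumed reconstruction, not necessarily the original) -- can be simulated. Each number does show up 1/6 of the time in the long run, yet the mean wait for the first 6 from a uniformly random start works out to 35/6, about 5.83, not 6:

```python
import random

rng = random.Random(5)

def turns_until_six(rng):
    """Coin-toss walk on 1..6 arranged in a circle, from a random start."""
    pos = rng.randint(1, 6)
    turns = 0
    while pos != 6:
        turns += 1
        pos = (pos - 1 + rng.choice((-1, 1))) % 6 + 1   # one step either way
    return turns

trials = 200_000
mean_wait = sum(turns_until_six(rng) for _ in range(trials)) / trials
print(mean_wait)  # close to 35/6 ≈ 5.83, not 6
```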

Actually, I am not. I am acutely aware that for any finite number of rolls, there is a probability, however small, that they will all be sixes. Or none will. Or every other one will. The likelihood of each of these can be computed. What is not possible however, is that if you continued rolling indefinitely, you would have any result other than N/6.
On which roll does it cease to be possible?
On which roll does it cease to be possible?

After a large number of them. ;-P

It is up to statisticians to determine, with one of their chi-squared tests (or whatever), whether some finite number of trials adequately demonstrates that an assumed probability is likely to be accurate. I, however, am not constrained by such mundane practical concerns. In fact, I need not roll a single die, because the theoretical probability is already known (it is implied in the OP). I just close my eyes and watch the numbers stream out, secure in the knowledge that as the number of trials increases, the relative difference between the expectation and the mean approaches zero. How quickly? Who cares? Theoretically, I have all the time in the universe, and then some.

All kidding aside, this is known as the Law of Large Numbers. It is worth noting that the absolute deviation from the mean can increase as the number of trials increases (in fact it does increase continually). This is of no concern however, because we are only interested in the relative deviation.

There is also a Law of Truly Large Numbers, which states that for a large enough sample population, anything, no matter how outrageously improbable, will eventually happen (such as rolling a large number of sixes in a row). This does not concern me either, because such regions, by definition, occupy a very small portion of the probability space in the long run. Furthermore, regions having very high frequency of a particular event occur with equal probability as regions of very low frequency, so, again in the long run, such unlikely events do not affect the mean.
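The contrast between these two laws can be made concrete with the binomial standard deviation for the count of sixes (a small deterministic sketch): the typical absolute deviation sqrt(n*p*q) grows with n, while the typical relative deviation sqrt(p*q/n) shrinks:

```python
from math import sqrt

p, q = 1 / 6, 5 / 6
ns = [10**2, 10**4, 10**6, 10**8]
abs_devs = [sqrt(n * p * q) for n in ns]   # typical |sixes - n/6|: grows
rel_devs = [sqrt(p * q / n) for n in ns]   # typical |sixes/n - 1/6|: shrinks
print(abs_devs)
print(rel_devs)
```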
