
I don't want to go about it ad infinitum.

Let me just observe that the statement of the problem (OP) basically asked for a mathematical proof that the average number of trials needed to produce an event of probability p is 1/p.

Yoruichi-san has done exactly that. I also offered a "simpler", questionable proof later on.

I feel that Bonanova's "simpler proof" is a convincing demonstration, but it falls short of the standard of a formal proof for the reasons I have explained in the previous posts. (I pointed out specifically where the simpler proof uses a presumed notion of average to derive the formula for it.)

Nothing that has been posted here since has moved me to reconsider my position.
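For reference, here is a sketch of one standard way to carry out the derivation the OP asked for (it is not necessarily the same argument as Y-san's post, which is not reproduced here). If each trial succeeds independently with probability p, the number of trials N up to and including the first success is geometrically distributed, so

```latex
% Expected number of trials to the first success, for per-trial probability p,
% using \sum_{k\ge 1} k x^{k-1} = 1/(1-x)^2 for |x| < 1.
E[N] \;=\; \sum_{k=1}^{\infty} k\,p\,(1-p)^{k-1}
      \;=\; \frac{p}{\bigl(1-(1-p)\bigr)^{2}}
      \;=\; \frac{1}{p},
\qquad\text{so } p = \tfrac{1}{6} \text{ gives } E[N] = 6.
```

Equivalently, conditioning on the first trial gives E[N] = 1 + (1-p)E[N], which also yields E[N] = 1/p.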

No, I never said that. I said that if you roll a die a LARGE number of times, N, you will have p*N outcomes of six (or any other number you choose).

I think we can all agree on the definition of probability as the "likelihood of an outcome," but we can also say it is the frequency with which an outcome occurs over time. So, if the number of sixes we roll with a die having probability p of rolling a six does not converge to p*N as N->infinity, then you must conclude either that N is not large enough, or that the probability of a six is not p.

No, we can't. The frequency with which an outcome occurs over time is related to the probability, i.e. the probability is a good predictor of it, but they are not the same; there is always some probability that the frequency of an outcome differs from the probability. The frequency with which an outcome occurs falls under statistics. Your discussion about chi-squared charts, statisticians, and Large Numbers is true, but it also falls under statistics.

I think you have some good ideas and a lot of knowledge, but the fact is that you just cannot prove probabilities inductively from data sets, no matter how LARGE. You can estimate them to a high level of confidence with a small error, but that still does not prove the probability is equal to the estimate.
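(As an aside, a quick simulation sketch of that point, in Python with numpy; the seed and sample sizes are arbitrary choices for illustration. The empirical frequency of sixes estimates p = 1/6 ever more tightly, but any finite run only yields an estimate, never a proof.)

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed, for reproducibility
p_true = 1 / 6                   # the probability we are trying to estimate

for n in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=n)   # n rolls of a fair six-sided die
    freq = np.mean(rolls == 6)           # empirical frequency of sixes
    print(f"n = {n:>9}: frequency = {freq:.5f}, |frequency - p| = {abs(freq - p_true):.5f}")
```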

And my PDEs were talking about electron wavefunctions ;P.

Edit: Probability talks about the chance of something occurring, meaning that there are different possible outcomes and you don't know which one will occur. Once the data set is known, the probabilities all become 0 or 1: you know what occurred and what didn't, there's no chance for it to be any different, and from then on you use the frequency of occurrence instead of the probability. I tried (unsuccessfully) to bring in the example of the collapse of the wavefunction because I think it demonstrates this fact best. A complex wavefunction collapses into a delta function once the event is observed, i.e. the result is known, and the collapsed state gives no information about the original wavefunction.

Edited by Yoruichi-san

O_o

The OP clearly implies that the probability of rolling a six is 1/6. Without that knowledge, you cannot solve the problem. My proof is also not empirical (I use no statistics).

O_o o_O

You misread my statement. I never said that probability is not having knowledge of the probability; I said that probability is not having knowledge of the complete results. Analyzing the result set is statistics.

All kidding aside, this is known as the Law of Large Numbers. It is worth noting that the absolute deviation from the mean can increase as the number of trials increases (in fact its typical size keeps growing). This is of no concern, however, because we are only interested in the relative deviation.

There is also a Law of Truly Large Numbers, which states that for a large enough sample population, anything, no matter how outrageously improbable, will eventually happen (such as rolling a large number of sixes in a row). This does not concern me either, because such regions, by definition, occupy a very small portion of the probability space in the long run. Furthermore, regions with a very high frequency of a particular event occur with the same probability as regions with a very low frequency, so, again in the long run, such unlikely events do not affect the mean.

Who enforces these laws?

Actually I think you're misinterpreting them. The Law of Large Numbers does not state that increasingly large samples will approach their expected mean value, only that they tend to. In other words, the larger the sample, the less likely it becomes that it will not approach the expected mean. But it is always possible that it will not. You can roll a fair die 10 times and get a six every time. You can roll it 100 times and get a six every time. There is no limit to the number of times you can do it. It just gets less likely the more you do it.

The same goes for the Law of Truly Large Numbers. It does not state what will happen, only what is likely.
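A small simulation sketch of the distinction drawn above (Python with numpy; seed and sample sizes are arbitrary): the absolute deviation of the count of sixes from p*N typically grows with N, while the relative deviation of the frequency from p typically shrinks, and neither behavior is guaranteed on any single run.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed
p = 1 / 6

for n in (1_000, 100_000, 10_000_000):
    sixes = int((rng.integers(1, 7, size=n) == 6).sum())  # count of sixes in n rolls
    abs_dev = abs(sixes - p * n)    # absolute deviation of the count from p*N
    rel_dev = abs(sixes / n - p)    # relative deviation: frequency minus p
    print(f"N = {n:>10}: |count - pN| = {abs_dev:10.1f}, |freq - p| = {rel_dev:.6f}")
```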

The frequency with which an outcome occurs over time is related to the probability, i.e. the probability is a good predictor of it, but they are not the same; there is always some probability that the frequency of an outcome differs from the probability. The frequency with which an outcome occurs falls under statistics.

I don't pretend to be on the cutting edge of probability theory, but I do have some experience with it in the not too distant past. Fortunately, there is no baseball tonight, and I was able to find my old textbook, in which I found the following passages:

"There are several common theories of probability. In the frequency theory of probability, the probability of an event is the limit of the frequency with which it occurs in repeated, independent trials under the same circumstances.(...) According to the subjective theory, probability is a measure of how strongly we believe an event will occur."

I would like to direct your attention to the highlighted word: "limit." It clearly states that the two are equal in the limit as N->infinity.

Probability talks about the chance of something occurring, meaning that there are different possible outcomes and you don't know which one it is.

That is the interpretation of the subjective theory.

The Law of Large Numbers does not state that increasingly large samples will approach their expected mean value, only that they tend to.

Actually, it says neither: "(I)n repeated, independent trials, all having the same probability p of an event, the chance that the relative number of events differs from the probability p by more than a fixed positive amount, e > 0, converges to zero as the number of trials N increases."
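In symbols, the quoted statement is the weak law of large numbers (writing S_N for the number of occurrences of the event in N independent trials):

```latex
% Weak law of large numbers: convergence in probability of the relative frequency to p.
\text{for every } \varepsilon > 0:\qquad
P\!\left(\left|\frac{S_N}{N} - p\right| > \varepsilon\right)
\;\longrightarrow\; 0
\quad\text{as } N \to \infty .
```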

[T]he fact is that you just cannot prove probabilities inductively from data sets, no matter how LARGE.

Perhaps I have not been clear enough. What I have been trying to say, without being too mathy, and (AHEM!) with no data, is the following:

Because the underlying assumption is that a '6' occurs with constant probability p = 1/6, there is no reason to expect the first interval to be different from any other (i.e. the system is time invariant). Therefore the expected interval between sixes is also constant, and equal to the average interval in the limit as the frequency approaches the probability, namely lim(N->infinity) N/(pN) = 1/p = 6. I still don't see what is wrong with that interpretation.
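Written out (a sketch of the same argument, with S_N denoting the number of sixes observed in N rolls):

```latex
% Average gap between sixes over N rolls, under the frequency-theory reading S_N/N -> p.
\text{average interval} \;=\; \frac{N}{S_N} \;=\; \frac{1}{S_N/N}
\;\xrightarrow[\;N\to\infty\;]{}\; \frac{1}{p} \;=\; 6 .
```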

Finally, I have never objected to Y-san's original solution (or whoever's it was). I was merely trying to show another way of getting there that seemed, to me, much more expedient. Unfortunately, it has not turned out that way :(. I also have no problem with anybody who chooses not to believe in the frequency theory, or who prefers a different one, but please provide justification for doing so.


Thank you, I do appreciate your viewpoint. Like you said, it's a very good "way of getting there" from a thinking perspective, but mathematically it does not prove the answer. I would "give you kudos" for your answer but not your proof ;P.

And thank you for showing that quote, however from a logical perspective, I have to disagree with your interpretation:

"There are several common theories of probability. In the frequency theory of probability, the probability of an event is the limit of the frequency with which it occurs in repeated, independent trials under the same circumstances.(...) According to the subjective theory, probability is a measure of how strongly we believe an event will occur."

From a logical perspective, it says that the frequency in repeated trials will approach the probability, i.e., as I said before, that the probability is a good predictor of the frequency. But it does not say that the frequency of an outcome is the probability. We all know that A->B does not mean B->A.

Your approach involves looking at a data set after a large number of trials and saying that the frequency will approach 1/6, which I do not disagree with. But I disagree that this proves the probability is 1/6. It may verify your prediction that the probability is 1/6 to within a certain error, which brings me to my second point, your other quote:

Actually, it says neither: "(I)n repeated, independent trials, all having the same probability p of an event, the chance that the relative number of events differs from the probability p by more than a fixed positive amount, e > 0, converges to zero as the number of trials N increases."

Like you did with some of my earlier posts, you highlighted the parts you thought were important and dismissed other parts of the same sentence. Looking at the part in red, it is apparent that the quote does not say that the difference between the probability and the frequency converges to 0, but that the probability that this difference is greater than "a fixed positive amount, e > 0" converges to 0. I.e. statistically, you predict that p is within some interval [p-e, p+e] and your confidence level approaches 100% as N->infinity, but not that e->0, i.e. not that the frequency approaches p.

*For those who may not know as much about statistics, I just want to say: using statistical analysis, you can estimate a quantity (such as a probability) to within some error (e in this case), at a certain % confidence. I.e. if I draw colored balls out of a bag, I can say "the percentage of blue balls is 15 +/- 0.1, at a 95% level of confidence", or equivalently, "the percentage of blue balls is within the interval [14.9, 15.1], at a 95% level of confidence".
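A sketch of that kind of statement in code (Python; the draw counts and the normal-approximation interval are illustrative assumptions, not figures from this thread):

```python
import math

# Hypothetical sample: 1000 draws with replacement, 150 of them blue.
n_draws, n_blue = 1000, 150

p_hat = n_blue / n_draws                         # point estimate of the proportion
se = math.sqrt(p_hat * (1 - p_hat) / n_draws)    # standard error of the estimate
z = 1.96                                         # ~95% confidence, normal approximation
low, high = p_hat - z * se, p_hat + z * se

print(f"Estimated proportion: {p_hat:.3f}, 95% CI: [{low:.3f}, {high:.3f}]")
# Increasing n_draws narrows the interval, but for any finite n the error never reaches 0.
```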

So you have shown that statistically you can push your level of confidence toward 100% as N->infinity, but you can't get the error to 0 at the same time.

I hope I'm not coming off as too harsh. It's just that I have had way more probability and statistics than I ever wanted, and so I can't help myself from responding when I think there is a misunderstanding. ;P

Actually, it says neither: "(I)n repeated, independent trials, all having the same probability p of an event, the chance that the relative number of events differs from the probability p by more than a fixed positive amount, e > 0, converges to zero as the number of trials N increases."
And this is somehow different from "The Law of Large Numbers does not state that increasingly large samples will approach their expected mean value, only that they tend to"? Different words, same meaning.

Perhaps your confusion is due to the phrase "converges to zero". That doesn't mean it ever becomes equal to zero. In fact it typically does not, which is why, however large the sample, it remains possible for results to deviate significantly from the expected mean ("expected" in this case meaning based on its probability).

You still seem to misunderstand me. I will try one more time, and then I promise to shut up.

First, we agree that the LLN does not prove anything. That was actually what I meant when I stated that "it said neither," and by highlighting the word "chance." That, however, is not the issue.

The issue is that you appear to think I am extrapolating the trend observed in some sort of finite data set (real or imagined), which I am not. The frequency theory does not attempt to prove itself through statistics. It does not state that the frequency approximates the probability. It does not say that the two are related, that it is the limit of a binomial distribution, or that the probability can be predicted within a certain level of confidence if the frequency of an event is known. It also does not attempt to prove equality of probability and frequency based on other axioms. It is the basis through which the probability of an event is formally defined.

I'll be the first to admit it has weaknesses, and of course the theory itself is based on observations in nature. But what useful theory is not? Your position, on the other hand, can be summarized by the statement that the frequency theory is false.

I will sign off by poking you in the eye, with apologies, but I just couldn't resist... :D

You stated earlier that once the electron is observed, the probability distribution of its position collapses to the Dirac function. This is incorrect. The best you can say is that its position is known with higher probability. How much higher depends on the level of confidence you have in your observation. XP

You stated earlier that once the electron is observed, the probability distribution of its position collapses to the Dirac function. This is incorrect. The best you can say is that its position is known with higher probability. How much higher depends on the level of confidence you have in your observation. XP

Would Prof. Heisenberg care to comment at this point? ;)

You stated earlier that once the electron is observed, the probability distribution of its position collapses to the Dirac function. This is incorrect. The best you can say is that its position is known with higher probability. How much higher depends on the level of confidence you have in your observation. XP

Yeah, and that higher probability is 1. XP

You may be getting confused about the Uncertainty Principle, which says you can't know both the exact position and the exact velocity, but you can know the position if you give up trying to determine the velocity. It's a trade-off, just like margin of error and confidence level, as I pointed out in my earlier post.

And my position is summarized by this: your interpretation of frequency theory is incorrect. I already pointed out based on your own quotes from your book that you are confusing confidence level with error, and that you are trying to say A->B means B->A, which is not true.

I don't really care about your poke in the eye, as I have landed several hard blows in your other areas ;P.

(I don't mean to be offensive, just responding in the spirit of fun and debate in the same way you responded to me ;))

Nothing you have said can be taken as offensive, nor do I mean any offense.

I know I said I would zip it, but I'm on a different topic now. About your electron, I'm sure you will correct me if I'm wrong, but if you wanted to know its position with probability 1, wouldn't you need to observe it with a particle of wavelength zero?
