BrainDen.com - Brain Teasers

Rainman

Members · Posts: 178 · Days Won: 5

Everything posted by Rainman

  1. It's a theorem in complex analysis, not in real analysis. Do you know about complex numbers?
  2. If P(H)=0.6, the probability of a 10-H string is 0.6^10 ≈ 0.006, whereas the probability of a 10-T string is 0.4^10 ≈ 0.0001. So 10-H strings should be much more frequent than 10-T strings.
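The two streak probabilities are easy to check numerically. A quick sketch in Python (variable names are mine; the numbers are from the post above):

```python
# Streak probabilities for a biased coin with P(H) = 0.6, P(T) = 0.4.
p_ten_heads = 0.6 ** 10
p_ten_tails = 0.4 ** 10
print(p_ten_heads)                 # ≈ 0.006
print(p_ten_tails)                 # ≈ 0.0001
print(p_ten_heads / p_ten_tails)   # = 1.5**10, roughly 58 times more frequent
```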
  3. That sucks, I don't have many friends myself. Aaryan's point, though, is that asking us to solve it for you is cheating. But since you got the hint of using a 9x6 matrix, I'll at least do you the favor of putting the message in a 9x6 matrix. WH3TEE HERWDN A2OO7T TSO6AY 0QTHN9 IU4UDF SAON8I 1RFDTV TE5RWE Hopefully you can see it now. Good luck with the competition!
  4. There are many, many more ways though.
  5. Just a side note: it is often falsely assumed that 100% probability is equivalent to absolute certainty. This is one of those cases; even if the coin is perfectly fair, it's possible that you will never reach H=T no matter how long you keep flipping. The probability is 0, but it's possible. Any given infinite sequence is possible, for example flipping heads every time. If we look at it backwards, suppose you did flip the coin infinitely many times and recorded your results. Whatever sequence you got, the probability of that particular sequence was 0, but it did happen. Suppose you generate a random real number between 0 and 10. The probability of generating pi is 0, but it is within the realm of possibilities. It's totally impossible to generate -pi. In general, suppose we have an event A. If A is absolutely certain to happen, then P(A)=1. But the converse is not true. If P(A)=1, we can't conclude that A is absolutely certain to happen. Probability theory has a term for this: if P(A)=1, then A is said to happen almost surely.
  6. I like the reasoning, but the proof by example uses a slanted interval. The true Median of 1/6 is not in the middle of the interval 0.1666 to 0.16667. What happens if you use a truly median interval, e.g., from 9/60 to 11/60? It seems to be leading to the opposite conclusion. I chose a slanted interval because it's shorter to write out. It has no bearing on the conclusion. If you insist on a median interval, we can use the interval 99999/600000<x<100001/600000 instead. The probability of the relative frequencies being in this interval approaches 1. If 99999/600000<x<100001/600000, then we can certainly conclude that 0.1666<x<0.1667, which I have already proven to imply that the payoff is less than 1 and tends to 0. http://en.wikipedia.org/wiki/Law_of_large_numbers It is a proven theorem.
  7. Done that. Twice. If the ratios are all within the interval 0.1666&lt;x&lt;0.1667, then the resulting payoff is always less than 1, and it tends to zero as N tends to infinity. An upper bound for the payoff is: Payoff < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N < 1. In plain words: in the long run, one hundred percent of the variations will be close enough to the median variation that their payoff tends to zero. To understand why this matters, do not concern yourself with the nominal values of the stakes and payoffs, but rather with their effect on your bankroll. If your bankroll is $1, and you bet $1 and hit 0.7, your bankroll is reduced from 1 to 0.7, a change by the factor 0.7. Here the nominal change equals the bankroll change. However, if your bankroll had been $2 instead, and you bet $1 and hit 0.7, your bankroll would be reduced from 2 to 1.7, a change by the factor 0.85. Because you only bet half your bankroll, the change is not as dramatic. If you bet half your bankroll instead of your entire bankroll, the payoffs would result in a change of 0.85, 0.9, 0.95, 1.05, 1.1, or 1.25. The geometric mean of these changes is (0.85*0.9*0.95*1.05*1.1*1.25)^(1/6) ≈ 1.008, which means you now have a positive expected effect on your bankroll. If you bet half your bankroll every time, you can expect it to grow exponentially (1.008^N). Now we might ask, where is the break-even point between a good game and a bad game? How much of your bankroll can you bet before you expect to lose money? Well, if you bet the fraction x of your bankroll, the payoffs would be 0.7x, 0.8x, 0.9x, 1.1x, 1.2x, and 1.5x. Adding the 1-x you have remaining on the side, the change in bankroll would be 1-0.3x, 1-0.2x, 1-0.1x, 1+0.1x, 1+0.2x, or 1+0.5x. The geometric mean would be GM = ((1-0.3x)(1-0.2x)(1-0.1x)(1+0.1x)(1+0.2x)(1+0.5x))^(1/6), which can be solved for x at GM=1.
The solution in the interval 0<x<1 is a bit over 0.989, which means the game is good for you as long as you bet between 0% and 98.9% of your bankroll. But why am I not concerned with the nominal values of the winnings, but rather their effect on my bankroll? Consider this: if you had a $1,000,000 total fortune (this includes your house and valuables), would you risk it all for a 50% chance to triple it? Would you risk it all for an 80% chance to double it? Would you risk $999,000 of it for a 60% chance to double it? Very few people would be silly enough to take those risks, even though the EV is very good. The true difference between having one million and being dead broke is a lot more dramatic than the difference between having two million and having one million, but the nominal difference is the same.
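The break-even fraction ("a bit over 0.989") can be pinned down numerically. A minimal sketch in Python, assuming the same six bankroll changes; the growth_factor helper and the bisection bounds are my own choices:

```python
def growth_factor(x):
    """Geometric-mean change of the bankroll per roll when betting
    the fraction x of it (the six equally likely changes from the post)."""
    changes = [1 - 0.3*x, 1 - 0.2*x, 1 - 0.1*x, 1 + 0.1*x, 1 + 0.2*x, 1 + 0.5*x]
    product = 1.0
    for c in changes:
        product *= c
    return product ** (1 / 6)

# Bisection for the break-even fraction where the geometric mean hits 1.
lo, hi = 0.5, 1.0            # growth_factor(0.5) > 1, growth_factor(1.0) < 1
for _ in range(60):
    mid = (lo + hi) / 2
    if growth_factor(mid) > 1:
        lo = mid
    else:
        hi = mid
print(round(growth_factor(0.5), 3))  # ≈ 1.008, betting half grows the bankroll
print(round(lo, 3))                  # ≈ 0.989, the break-even fraction
```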
  8. Prime, if you have a bankroll of $20 and can run method 2 twenty times, then of course you're going to win. No one here is arguing against that. But method 2 in the OP does not say anything about being able to reset your stake at $1 whenever you feel like it. You get one shot. One. Shot. For all practical purposes your bankroll is $1. If you lose, you lose. The most likely result is that you will lose. Your simulations all show that. If you run 10 simulations of 1000 rolls each and 6 of them are losing, it means 6 out of 10 people will lose. The other four will win. As N grows larger, the probability of winning approaches 0, as I have proven mathematically. Which means, if all 7,000,000,000 people in the world run method 2 for long enough, the probability approaches 1 that they will all lose.
  9. +1 Precisely. For $1 you'd be buying $5 worth of chances (provided the pot can't be split between several winners.) That's how I play lottery. In gambling EV is what counts. I think, you've misunderstood what I was asking to prove. With N rolls there are 6^N total variations. There are T_W variations ending in a payoff P > 1 and T_L variations ending with the final payoff P < 1. T_W+T_L=6^N. Find the limit of T_W/6^N as N tends to infinity. I agree that "Expected Outcome" tends to zero as the number of rolls N tends to infinity. I just don't see it all that relevant to winning in this game. If this game was offered in casinos, I would have most certainly won millions in a matter of days starting with a bankroll of $100 or so. And I wouldn't let the entire $100 ride in just one sitting. Money management still counts. However, exponential accumulation of winnings should be the predominant scheme to win big. Another sure thing is -- the Casino offering that game would go bust very quickly. I proved that the probability of a randomly selected variation ending in a payoff P>1 approaches 0 as N approaches infinity. T_W/6^N is the probability of a random variation ending in a payoff P>1. Hence I have proven that T_W/6^N approaches 0 as N approaches infinity. Simultaneously, I also proved that this is true not just for payoffs P>1, but for payoffs P>E, as long as E>0. For example, if we let T_p be the number of variations that give at least a penny back, then T_p/6^N also approaches 0 as N approaches infinity. I will try to explain my proof step by step: Given a randomly created variation of N rolls, we can count the relative frequency of each result. Let f1 be the relative frequency of 0.7-rolls, which is defined as the number of 0.7-rolls divided by the total number of rolls N. It follows that the number of 0.7-rolls is f1*N. Similarly, let f2 be the relative frequency of 0.8-rolls, f3 the relative frequency of 0.9-rolls...
and so on until f6, being the relative frequency of 1.5-rolls. Now, consider the interval 0.1666<x<0.1667. Clearly 1/6 is an internal point in this interval. Because the expected values of f1 through f6 all equal 1/6, the law of large numbers guarantees the following as N approaches infinity: the probability approaches 1 that f1 through f6 will all be within this interval. A direct implication* of f1 through f6 all being within this interval is that the payoff is less than 1. Replacing the clause "f1 through f6 will all be within this interval" with its direct implication "the payoff is less than 1" in the bold-face text above, we get that the probability approaches 1 that the payoff is less than 1**. If the probability of A approaches 1, then the probability of (not A) approaches 0. So the probability approaches 0 that the payoff is not less than 1. Consequently the probability that you get a payoff P>1 approaches 0. Since all variations are equally likely, the probability that you get a payoff P>1 is equal to the number of such variations divided by the total number of variations, i.e. T_W/6^N. So replacing "the probability that you get a payoff P>1" with its equal "T_W/6^N", we get the sought result: T_W/6^N approaches 0. *I proved this implication in my original proof (the inequality part), but I will try to explain that proof as well. So what I'm trying to prove is the implication "if f1 through f6 are all within the interval 0.1666<x<0.1667, then the payoff is less than 1". We can write the payoff as P = 0.7^A * 0.8^B * 0.9^C * 1.1^D * 1.2^E * 1.5^F, where A is the number of 0.7-rolls, B is the number of 0.8-rolls, etc. But as already stated, the number of 0.7-rolls is f1*N, the number of 0.8-rolls is f2*N, etc. So replacing A through F with f1*N through f6*N, and extracting the common exponent N, we get the payoff as P = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N.
This function will increase as f1, f2, and f3 decrease (because they are exponents for base numbers less than 1), and it will increase as f4, f5, and f6 increase (because they are exponents for base numbers greater than 1). So to maximize the payoff, we should minimize f1 through f3, and maximize f4 through f6. Setting f1 through f3 as 0.1666 and f4 through f6 as 0.1667, we get an upper bound for the payoff: P < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N, which can be verified by calculator to imply P < 0.9998^N, which in turn implies P < 1. **If B is a direct implication of A, then P(B) is greater than or equal to P(A). Since P(A) approaches 1, and P(A) <= P(B) <= 1, it follows that P(B) approaches 1 as well. So we can replace the original clause with its direct implication.
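The "verified by calculator" step is easy to reproduce. A one-off check in Python of the per-roll bound used in the proof:

```python
# The per-roll factor when f1..f3 are pushed down to 0.1666
# and f4..f6 are pushed up to 0.1667, as in the proof above.
base = (0.7 ** 0.1666 * 0.8 ** 0.1666 * 0.9 ** 0.1666
        * 1.1 ** 0.1667 * 1.2 ** 0.1667 * 1.5 ** 0.1667)
print(base)  # ≈ 0.99972, comfortably below 0.9998, so P < 0.9998**N
```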
  10. I'm saying that if $1 was my entire bankroll, I definitely wouldn't play that lottery. Sure, I could get lucky and win the big jackpot, and sure the EV is positive, but protecting your bankroll is just as important as finding favorable games. You can win in the short run playing unfavorable games as well, but in the long run you are going to lose. And if you don't protect your bankroll, you're going to lose in the long run. If you're a recreational gambler with a steady job, and you spend a small part of your weekly salary on some weekend bets, hoping to cash in big so you can have a nice long holiday or early retirement, then bankroll management is not really an issue for you. Your bankroll is refilled on a regular basis, and you're not risking anything. However, I believe the appropriate definition of recreational gambling is that you're doing it for fun. If you're doing it to win money, if you're looking to make your living as a gambler, constantly looking for bets with a small edge to grind, knowing that if you do it right your profits will increase exponentially over time, until one day you're rich enough to retire, then you simply must know how to protect your bankroll. Otherwise you'll end up an addict, putting yourself in debt because you "know you can beat the game, you've just been unlucky so far".
  11. Once again, those who don't believe it's a winning game, can play the Casino side. I agree that the EV is (31/30)^N for N rolls. But the problem is, as N grows larger, that value will become distributed over a relatively smaller set of variations. For a large enough N, everybody in the world could play the game and everyone would lose. You asked if the ratio of wins to overall variations approaches 0 as N approaches infinity. It does, and I will prove it as you asked. The proof is at the end of this post. As for your simulations with 10,000 rolls each: for some perspective, consider what the EV is for 10,000 rolls. (31/30)^10000 ~ 3*10^142. The expected outcome is about 3*10^-2. Your largest result was about 9*10^11, followed by 4*10^6, 1*10^2, 2*10^1, and six results of something times 10^(-something small). Now imagine if you were the expected value, sitting around 142 at the logarithmic scale, looking down at those ten simulations, knowing that the expected outcome is sitting around -2. You might think "hey, those ten results are all packed pretty neatly around the expected outcome, while I'm all alone up here". It should become clear that it's unreasonable to expect to get the expected value. If you ran 7 billion more simulations, one for each person in the world, I can't imagine that a single one of us could be expected to win nearly that much. 7 billion simulations is still a very small sample from the 6^10000 possible variations. Also, if you had combined those ten simulations into one 100000-roll game, you would have lost that game (based on your remark that the losses gave way below a penny in return). The statement that you practically can't lose with method 2 is just wrong. In the long run, you practically can't win (proof still coming up). The only reason you can still win after 10000 rolls is that the expected outcome per roll is so close to 1. The expected outcome for six rolls is 0.99792, which means 0.99792^(1/6) ~ 0.99965 per roll.
Theorem: the probability of winning (or even getting at least a fixed non-zero return) approaches 0 as the number of rolls approaches infinity. Proof: There are six equally likely outcomes of one roll, let's denote them a1 through a6 (a1 being 0.7 and ascending to a6 being 1.5). Let fi denote the relative frequency of the outcome ai during a sequence of rolls. For example, if we roll 12 times and get 1*a1, 3*a2, 0*a3, 1*a4, 2*a5, and 5*a6, then f1 = 1/12, f2 = 3/12, f3 = 0/12, f4 = 1/12, f5 = 2/12, and f6 = 5/12. The exact number of outcomes ai is the relative frequency fi times N. The final outcome of our game will be G = 0.7^(f1*N) * 0.8^(f2*N) * 0.9^(f3*N) * 1.1^(f4*N) * 1.2^(f5*N) * 1.5^(f6*N) = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N. The expected value of each fi is 1/6. By the law of large numbers, P(0.1666<fi<0.1667, for all i) approaches 1 as N approaches infinity. But if 0.1666<fi<0.1667, then G = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N. So the probability approaches 1 that your outcome is smaller than a value which approaches 0 as N approaches infinity. Conversely, for any given E>0, the probability approaches 0 that your outcome is at least E.
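The theorem's conclusion can also be illustrated numerically. A sketch using the normal (CLT) approximation to the sum of log-multipliers; the approximation and the win_prob helper are my additions, not part of the proof above:

```python
import math

# CLT sketch: the log of the final payoff after n rolls is a sum of n
# independent log-multipliers, so for large n it is approximately normal.
logs = [math.log(m) for m in (0.7, 0.8, 0.9, 1.1, 1.2, 1.5)]
mu = sum(logs) / 6                            # per-roll mean, slightly negative
var = sum(v * v for v in logs) / 6 - mu * mu  # per-roll variance

def win_prob(n):
    """Approximate P(payoff > 1), i.e. P(sum of n log-multipliers > 0)."""
    z = -mu * n / math.sqrt(var * n)          # standardized distance to break-even
    return 0.5 * math.erfc(z / math.sqrt(2))

for n in (100, 10_000, 1_000_000, 100_000_000):
    print(n, win_prob(n))                     # decreases toward 0 as n grows
```

Under this approximation the win probability sits just below 1/2 for thousands of rolls, which is why short simulations still show frequent wins; the decline toward 0 only becomes dramatic for very large N.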
  12. Do you need an infinite stake to play Method 1, as you would in a Martingale scenario? Can't you feed in dollar bills until your stake exceeds a certain amount, then play with house money? After your stake reaches $2 you're basically starting over, with a free dollar to use. What I meant is, if your bankroll is finite, there is a risk that you will run out of money and be unable to continue with method 1. You could hit a freak streak of consecutive losses. The probability is non-zero so it has to be accounted for, when calculating expected outcome in the long run. With method 1, the expected outcome for N games equals N*31/30 only because your bankroll is assumed to be infinite. With a finite bankroll B, the expected outcome for N games would equal N*31/30 for small values of N, but once B-0.3N<1 the expected outcome would drop below N*31/30. Trying to calculate it exactly with respect to both B and N would be way too much for my brain, and I don't think the formula would be pretty. Your point is valid though, we might still have a positive expected outcome for method 1, with a large enough bankroll. It could be tested with computer simulations, but I don't have the programming skills to do those. Besides, the expected outcome is at most N*31/30, which is dwarfed by the ~1.049^(N/6) you would get by betting half your bankroll each time.
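Since the post says a simulation of method 1 with a finite bankroll would settle the point, here is a rough Monte Carlo sketch; the stopping rule (quit when the bankroll can't cover a $1 bet), the seed, and the sample sizes are my own assumptions:

```python
import random

def method1(start, n_rolls, rng):
    """Method 1: bet $1 flat on every roll; stop if the bankroll can no
    longer cover a $1 bet (the 'freak streak' ruin case)."""
    multipliers = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
    bankroll = start
    for _ in range(n_rolls):
        if bankroll < 1:
            return bankroll, True        # ruined before finishing
        bankroll += rng.choice(multipliers) - 1
    return bankroll, False

rng = random.Random(7)
trials, rolls = 2000, 500
for start in (5, 100):
    results = [method1(start, rolls, rng) for _ in range(trials)]
    avg_profit = sum(b for b, _ in results) / trials - start
    ruins = sum(1 for _, ruined in results if ruined)
    print(start, round(avg_profit, 2), ruins)
```

On typical runs the $100 bankroll never goes bust and its average profit lands near the infinite-bankroll value of 500/30 ≈ $16.7, while the $5 bankroll occasionally hits the freak streak and averages somewhat less.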
  13. Yes, there it is -- Expected Value vs. Expected Outcome. I fancy myself as an avid gambler. And in practice I would choose the mix of the two methods, as Rainman has suggested here. (Playing an infinite number of times is rather impractical.) I have run an experiment playing 422 consecutive rolls 10 times with the following results: 4 times I had less than $0.01 left of my initial $1 bankroll. 3 times I had between $0.02 and $0.35 left. 3 times I won with the largest win of $83.62. That left me very much ahead (more than $80, while putting at risk just $10). The new question here is, what is the optimal string of consecutive rolls? 3 times I... It seems your post was cut off unfinished. Expected outcome is the true value for any gambler who seeks to eliminate the "gambling" part. The probability is 1 that your own outcome approaches the expected outcome as the number of games approaches infinity. So in the long run, you are practically guaranteed to win if your expected outcome is positive. The same is not true for expected value. Expected value is too influenced by the extremely high payoffs in the extremely unlikely variations. In this case, if you play for example 100 times, the extremely lucky variation where you hit 1.5 every time would yield a net payoff of roughly +$406,561,177,535,215,236. On the other hand, the extremely unlucky variation where you hit 0.7 every time would yield a net payoff of roughly -$1. The average net payoff, or expected value, would be (31/30)^100-1, or roughly $27. So the deviation (actual payoff - average payoff) is 406,561,177,535,215,209 for the luckiest variation and only -28 for the unluckiest variation. As follows, the expected value is extremely tilted by the high variance from impossibly lucky scenarios. You would need to be insanely lucky just to get anywhere close to the EV. Your own experiments illustrate this perfectly.
The average net payoff for running 422 consecutive games 10 times is 10*((31/30)^422-1), or roughly $10,220,338. Your actual net payoff was just more than $80, falling way short of the EV. Had you kept your entire bankroll running for all 4220 games, you would have lost almost everything. This was not a case of bad luck, but rather quite expected. Had you instead bet half your bankroll every time, with a starting bankroll of the same $10, for 4220 games, your expected outcome would have been ~10*1.04924^(4220/6), or +$4,800,000,000,000,000. Feel free to simulate it, you will end up somewhere around that number. Your EV would of course be even higher, 10*((61/60)^4220-1) or roughly +$2,000,000,000,000,000,000,000,000,000,000.
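The "feel free to simulate it" invitation is easy to take up. A sketch comparing full-bankroll and half-bankroll betting (run count, roll count, and seed are arbitrary choices of mine; medians are reported because single runs vary wildly):

```python
import random
from statistics import median

MULTIPLIERS = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

def play(fraction, n_rolls, rng):
    """Start with $1 and bet the given fraction of the bankroll on every roll."""
    bankroll = 1.0
    for _ in range(n_rolls):
        stake = bankroll * fraction
        bankroll += stake * (rng.choice(MULTIPLIERS) - 1)
    return bankroll

rng = random.Random(3)
runs, rolls = 300, 5000
full = [play(1.0, rolls, rng) for _ in range(runs)]
half = [play(0.5, rolls, rng) for _ in range(runs)]
print("median full-bankroll outcome:", median(full))
print("median half-bankroll outcome:", median(half))
```

On a typical run the median full-bankroll result is a small fraction of the original dollar, while the median half-bankroll result is astronomically large, in line with the ~1.008^N growth described above.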
  14. Phil's post is equivalent to half of the prisoners guessing there is an even number of blue hats, and the other half guessing there is an even number of red hats. Which is equivalent to everyone guessing there is an even number of blue hats. A 51-49 distribution, for example, is enough to guarantee that everyone fails. He had the right idea, but switching between counting red as 1 and counting blue as 1 does not change the outcome of the guess.
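The parity argument can be checked directly. A small sketch, assuming the usual setup where each prisoner sees every hat but their own and guesses as if the total number of blue hats were even:

```python
def parity_guesses(hats):
    """Each prisoner sees every hat except their own and guesses the colour
    that would make the total number of blue hats ('B') even."""
    guesses = []
    for i in range(len(hats)):
        seen_blue = sum(1 for j, h in enumerate(hats) if j != i and h == 'B')
        guesses.append('B' if seen_blue % 2 == 1 else 'R')
    return guesses

even_case = ['B'] * 50 + ['R'] * 50   # 50 blue hats: the parity assumption holds
odd_case = ['B'] * 51 + ['R'] * 49    # 51 blue hats: the parity assumption fails
print(sum(g == h for g, h in zip(parity_guesses(even_case), even_case)))  # 100
print(sum(g == h for g, h in zip(parity_guesses(odd_case), odd_case)))    # 0
```

When the parity assumption is right everyone guesses correctly, and when it is wrong (as in the 51-49 case) everyone fails, exactly as stated.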