BrainDen.com - Brain Teasers

bonanova

Moderator
  • Posts: 6975
  • Joined
  • Last visited
  • Days Won: 66

Everything posted by bonanova

  1. Five balls, A, B, C, D and E weigh, in some order, 1, 2, 3, 4, and 5 pounds. How many times must a balance scale be used to determine their weights?
  2. Each bet of $1 gives me a .99 chance of winning $1000, for an expectation of $990. 990 is the AM of one 0 and ninety-nine 1000s. So I will play the game (method 1) as long as I can stay awake.

     There really is no practical way to show a failure for Method 2 other than to say eventually you will hit a zero payoff and lose your $1 bet, although the numbers get very large. Your stake grows by a factor of 1000 each time the payoff is not zero, and you have to apply the payoffs 69 times for a 50% chance of getting a 0. By then your stake is $10^207. Applying the payoffs 916 times reduces the survival chances of your stake (and by then it's $10^2748) to about .0001. It's tempting to say you'd play that game multiple times as well. But that's still method 1.

     The reality is your stake remains finite and thus vulnerable to being wiped out by a 0 payoff. But the other reality is that so long as n remains finite, getting a 0 payoff is not a certainty. That is, (99/100)^n is never zero. The strongest statement I can think of now is that as n increases you have a vanishingly small chance of winning an astronomical amount of money. But when your winnings equal the number of electrons in the universe, the probability of taking them home will still not be zero. Even though applying the payoffs once per second might take you the age of the universe to get your stake that high.

     I didn't buy the (31/30)^n argument in previous posts, because I sensed method 1 lurking in the reasoning. It's too easy to automatically think about playing a game multiple times and ending up ahead, vs. playing one game forever. Forever is such an impractically long time. So I don't completely buy the current 990^n growth rate either. But I look for an expression that goes to zero in the limit, and can't find it. In my post 30, there also was a small but non-zero chance of being ahead after 2 million pulls.

     Bravo. I fall on my sword.
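     A quick Python sketch of the arithmetic quoted above; the 0.99 survival chance per pull and the 1000x stake growth are the figures from the post, and the two pull counts are the ones cited:

        import math

        p_survive = 0.99     # chance a single payoff is not the zero payoff
        growth = 1000        # stake multiplier on each non-zero payoff

        for n in (69, 916):                      # pull counts quoted above
            survival = p_survive ** n            # chance of never hitting the 0 payoff in n pulls
            digits = n * math.log10(growth)      # stake is 1000**n, i.e. 10**digits dollars
            print("n = %d: survival %.4f, stake ~ 10^%d" % (n, survival, round(digits)))

     Running it gives roughly a 0.50 survival chance at 69 pulls (stake ~ $10^207) and about .0001 at 916 pulls (stake ~ $10^2748).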
  3. Can a Method-2 strategy win using the payoffs given in the OP?

     Using the {.7 .8 .9 1.1 1.2 1.5} payoffs of the OP, I took 2,000,000 random selections (pulls) and put them into 20 sets of 100,000 each. From this I calculated {twenty products}, and they ranged in value from 1.38 x 10^-72 to a whopping 1.55 x 10^72! Six of the twenty were >1; the median value was about 10^-18. All payoffs were positive, of course, so the average was at least 1/20 of the highest value. The AM, in fact, was a whopping 7.74 x 10^70.

     In the context of comparing Methods 1 and 2 from the OP, what can we say from these {twenty numbers}? We might assert that if we play Method 2 multiple times, say n times, where n is large enough, we'll get at least one whopping payoff which, when we add the results, will more than offset the $(n-1) we may have lost on the other games. Voila! Therefore we conclude that Method 2 is a winning strategy that pays off handsomely. In this case, where n was a meager 20 games, using Method 2, $20 became $10^72. Wow. We will play that game any day of the week!

     Or we could consider the {20 numbers} to be a {set of representative payoffs} that apply to the 100,000-pull game. Let's play that game twenty times, using the strategy of Method 1. Thus, we bet $1. (Pull 100,000 times. Take our winnings. Bet another $1.) Repeat (20 times). We win big. $20 became $10^72. Wow. We will play that game any day of the week, too!

     Notice these stories describe the same actions. Not surprisingly they give the same results. The only difference is that first we say we are using Method 2. Then we say we are using Method 1. But in Method 2 there is no provision for periodically starting over with a fresh $1 and then at the end adding the results. That process is the heart of Method 1.

     To rightly apply a Method-2 strategy to these {20 numbers} we must multiply them. And that makes an amazing difference. Their sum is 1.55 x 10^72; but their product (the result of betting $1, then pulling 2 million times) is a very disappointing 6.88 x 10^-244. That is to say that our initial stake could have been as high as $10^244, and we'd end up with enough money for a McDonald's #7 meal with a medium Coke. My favorite. Not surprisingly, the 2-millionth root of 6.88 x 10^-244 is 0.9997200883, close to the GM of {.7 .8 .9 1.1 1.2 1.5}.
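     For anyone who wants to reproduce the experiment, here is a minimal Python sketch of what's described above. The exact numbers will differ from mine, since the pulls are random; the payoff set is the one from the OP, and the block products are tracked in log10 to avoid overflow and underflow:

        import math, random

        payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
        pulls, block = 2_000_000, 100_000

        log_blocks = []                             # log10 of each 100,000-pull product
        for _ in range(pulls // block):
            s = sum(math.log10(random.choice(payoffs)) for _ in range(block))
            log_blocks.append(s)

        products = [10 ** s for s in log_blocks]    # the twenty block "payoffs"
        print("largest block product  ~ 10^%.0f" % max(log_blocks))
        print("smallest block product ~ 10^%.0f" % min(log_blocks))
        print("arithmetic mean of the twenty:", sum(products) / len(products))

        total_log = sum(log_blocks)                 # log10 of the full 2,000,000-pull product
        print("overall product ~ 10^%.0f" % total_log)
        print("2,000,000th root:", 10 ** (total_log / pulls))   # hovers near the GM, 0.99965...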
  4. This post refers to the OP and the implications, if any, that stem from the fact that pulling twice for the 36 possible outcomes gives you a positive payoff.

     The OP asks to compare two methods: M1 and M2. The OP gives the payoffs as .7 .8 .9 1.1 1.2 1.5, and assures us they are not biased. Over time there is no preference. AM = 1.0333 ... GM = 0.9996530325.

     M1: Bet $1. (Keep your winnings; bet a new $1.) Repeat (.) Repeat means: do not stop after n pulls.
     M2: Bet $1. (Bet your winnings.) Repeat (.)

     Over time, M1 wins. (AM>1.) I will take the player's side on this game.
     Over time, M2 loses. (GM<1.) I will take the house's side on this game.

     Variation:
     M3: Bet $1. (Pull twice. Keep your winnings. Bet a new $1.) Repeat (.) To be precise, pull (twice) only for each of the 36 outcomes, then stop and average the outcomes. Over time this difference goes away.

     For the 36 outcomes (and over time as well), M3 wins. I will take the player's side on this game.

     There seems to be disagreement only about M2. There is a conjecture that (M3 wins) ==> (M2 wins). Let's revise the payoffs:

     .49 .56 .56 .63 .63 .64 .72 .72 .77 .77 .81 .84 .84 .88 .88 .96 .96 .99 .99 1.05 1.05 1.08 1.08 1.2 1.2 1.21 1.32 1.32 1.35 1.35 1.44 1.65 1.65 1.8 1.8 2.25
     AM = 1.0677777 ... GM = 0.9993061854

     Over time, M1 wins. (AM>1.) I'll be the player here.
     Over time, M2 loses. (GM<1.) I'll be the house here.

     The second set of payoffs comprises the pairwise products of the first set. M2 (win or lose) has the same outcome in the two cases. It differs from M2 in the first case only because we look at results after even-numbered pulls. M1 still wins, and now it wins faster: AM is larger. M1 in the second case must have the same outcome over time as M3: the same actions are taken. M3 thus does not relate to M2 at all. Nor does it depend on GM. M3 is equivalent to M1 with better payoffs. It wins, therefore, only because AM>1.

     Conclusion: (M3 wins) =/=> (M2 wins).
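     A short Python check of the two payoff sets. It confirms the relationship this argument leans on: the AM of the 36 pairwise products is AM^2 (still above 1, so M1/M3 win over time) and their GM is GM^2 (still below 1, so M2 still loses over time):

        import math
        from itertools import product

        payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
        pairs = [a * b for a, b in product(payoffs, repeat=2)]   # the 36 pairwise products

        def am(xs):
            return sum(xs) / len(xs)

        def gm(xs):
            return math.exp(sum(math.log(x) for x in xs) / len(xs))

        print("single pull:  AM = %.7f  GM = %.10f" % (am(payoffs), gm(payoffs)))
        print("double pull:  AM = %.7f  GM = %.10f" % (am(pairs), gm(pairs)))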
  5. Not necessarily. What you say is correct: there are only two possibilities. But are they equally likely? Try thinking in terms of "they are not both boys." It's logically equivalent. Do you still get 1/2?
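     Assuming this is the usual two-children setup, where the given information amounts to "they are not both boys," a brute-force enumeration in Python shows why the two remaining possibilities for the pair are not equally likely:

        from itertools import product

        pairs = list(product("BG", repeat=2))                  # BB, BG, GB, GG -- equally likely a priori
        not_both_boys = [p for p in pairs if p != ("B", "B")]
        both_girls = [p for p in not_both_boys if p == ("G", "G")]
        print(len(both_girls), "out of", len(not_both_boys))   # 1 out of 3, not 1 in 2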
  6. Head spins. What he said. Nice to see you again.
  7. Add 0 and 10 to the set of payoffs: 0, 0.7, 0.8, 0.9, 1.1, 1.2, 1.5, 10. The arithmetic mean is now greater than 2 and the comparison of the methods is clear.
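     A sketch of the two means for the augmented payoff set, assuming (as elsewhere in this thread) that the AM governs Method 1 and the GM governs Method 2:

        payoffs = [0, 0.7, 0.8, 0.9, 1.1, 1.2, 1.5, 10]

        am = sum(payoffs) / len(payoffs)    # 2.025 -- Method 1 more than doubles a fresh $1 per pull, on average
        prod = 1.0
        for p in payoffs:
            prod *= p
        gm = prod ** (1 / len(payoffs))     # 0.0 -- the single zero payoff eventually wipes out Method 2's stake
        print(am, gm)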
  8. I was not implying any actual "pulls" in my illustration. Nor did I say anything about withdrawing or adding any amounts to the stake. I did not mean it as an actual trial in the game. I meant it as the enumeration of all possible variations with their respective probabilities. I misunderstood. I inferred it to be an assertion that Method 2 was a winning strategy.
  9. I think the first part has been suggested for any N. But inserting the 3:1 condition is difficult when N is odd. Perhaps a general solution exists for a 2p x 2p grid. I'll think about that one.
  10. The six equally likely payoffs are .7 .8 .9 1.1 1.2 1.5. Their arithmetic mean AM is 1.033333 ... Their geometric mean GM is 0.9996530325.

      In Method 1 you pay $1.00 for each pull and remove your winnings. In Method 2 you pay $1.00 once and keep your stake at risk. Thus, AM (>1) gives you the expected return per dollar per pull for Method 1. (Winnings are added.) GM (<1) gives you the expected return per dollar per pull for Method 2. (Winnings are multiplied.)

      Post 3 paid $1.00, pulled the handle twice and then removed the winnings. This was done 36 times with the results averaged. This is not Method 2.

      That's not what I said in Post 3.

      It's actually Method 1 for a new set of payoffs: the 36 pairwise products of the original six payoffs.

      No, it is not Method 1. On the second turn the entire bankroll is staked -- not the original betting amount.

      This is a winning game, with an average return on a dollar bet of AM^2. AM^2 is in fact the arithmetic mean (AM) of the new set of payoffs. So this is one more example of a winning Method 1 result. To test Method 2 for 72 pulls of the handle, you bet $1.00 once and keep the result at risk for all 72 pulls. That final result is just the product of the 36 pairwise products, which is 0.9753235719 = GM^72. And that is the expected result for Method 2: each pull multiplies your stake by GM.

      Again, why does Method 1 win and Method 2 lose? Because AM applies to added winnings and AM>1; GM applies to multiplied winnings and GM<1.

      Simulation: Pull the handle 100,000 times with random results:
      Method 1: Average return on each $1.00 bet is 1.032955 <--> AM = 1.0333 ...
      Method 2: Stake goes from $1.00 to $1.414863 x 10^-31. The 100,000th root of 1.414863 x 10^-31 is 0.9992899 <--> GM = 0.9996530325.

      There is an interesting, thought-provoking point there, presenting a kind of pseudo-paradox in this problem. I stand firmly by my solution in post 3, though I am beginning to have some doubts about its straightforwardness property. The illustration I gave in support of the solution was not a proof. And since bonanova has misinterpreted my illustration, it must have been unintelligible. (It happens to me from time to time. In my head a statement I make seems precise and clear, but other people can't make out the ends of it.) So in the interest of befuddlement, perplexity, and creative thought... For anyone who is still not convinced, I am ready to play the game with a fair die, staking my entire bankroll on each roll of the die. (To speed up the process, we could roll 12 dice at a time.)

      It seemed clear what I implied about the pairwise case, but maybe it wasn't. On the second pull the entire stake is at risk. Correct. You subject your $1 to two pulls rather than to a single pull. What I did not say explicitly was that's equivalent to a single pull with a payoff equal to the product of the two payoffs. Then after the second pull you withdraw your winnings. They are never again at risk. What I said was that this is equivalent to Method 1, just using a different set of payoffs. You had 72 pulls, but you didn't bet just $1. You bet $36. To represent Method 2, you can't put into play a fresh $1, and take out your winnings, every other pull. You must leave your stake at risk for all 72 pulls. If you do, eventually the .7s will bring you down.
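      A hedged Python sketch of the 100,000-pull simulation mentioned above; each run gives different numbers, but Method 1's average return hovers near the AM while Method 2's per-pull factor hovers near the GM:

         import math, random

         payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
         n = 100_000
         pulls = [random.choice(payoffs) for _ in range(n)]

         m1_return = sum(pulls) / n                        # Method 1: fresh $1 each pull; compare with AM
         m2_log10 = sum(math.log10(p) for p in pulls)      # Method 2: one $1 stake, compounded over all pulls
         m2_per_pull = 10 ** (m2_log10 / n)                # per-pull factor; compare with GM

         print("Method 1 average return per $1:", m1_return)
         print("Method 2 final stake: about 10^%.1f dollars" % m2_log10)
         print("Method 2 per-pull factor:", m2_per_pull)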
  11. Do you need an infinite stake to play Method 1, as you would in a Martingale scenario? Can't you feed in dollar bills until your stake exceeds a certain amount, then play with house money? After your stake reaches $2 you're basically starting over, with a free dollar to use.
  12. Where is Bushindo when you need him? Mr. Bayes.
  13. My son teaches HS English and suggested -- forced me, actually -- that I read the book. I have indelible images of George and Lenny, ones that seeing the movie did not erase. BTW, which grid were you inquiring about? The 8x8 moves, or the 7x7 moves? The 8x8-move case actually appeared here a while back, so I already had the solution. I wasn't going to post it until I saw that no one else did.
  14. The first two answers have been given (post 2) and verified (post 5) by a 100,000-pull simulation. The discussion question elicited several thoughts (posts 5 7 8) that made the answers intuitive. Finally, the answers can be changed (post 7) by changing the payoffs.
  15. True. But naive analysis shows a shortfall. OP asked for blame, or cause, of "apparent" shortfall. That goes to Betty. Kind of. The blame should have had been lain with Betty's mother -- that scalleywag! How about that use of the past pluperfect in conjunction with futuristic pirate talk! Or it was because of the May 9th blackout the year previous.
  16. Depends on the quality of the gun. Were the names not taken from "Of Mice and Men"?
  17. True. But naive analysis shows a shortfall. OP asked for blame, or cause, of "apparent" shortfall. That goes to Betty. Kind of.
  18. Can the guys choose the distance from the fulcrum they jump from? Do they get off the scale after they jump? As a clue, describe a first step, before and after -- it doesn't have to be the correct first step. I think this is a very interesting puzzle; it just needs to be understood a bit. Thanks.
  19. I suspect John Steinbeck and Robert Burns are lurking somewhere nearby.