Comparing gambling systems

50 posts in this topic

Posted · Report post

So you're saying you wouldn't play a lottery with a 1/1,000,000 chance of winning $5,000,000 with a $1 ticket?

I'm saying that if $1 was my entire bankroll, I definitely wouldn't play that lottery. Sure, I could get lucky and win the big jackpot, and sure the EV is positive, but protecting your bankroll is just as important as finding favorable games. You can win in the short run playing unfavorable games as well, but in the long run you are going to lose. And if you don't protect your bankroll, you're going to lose in the long run.
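
To make the arithmetic of that point concrete, here is a minimal sketch (plain Python) of the thread's hypothetical lottery:

```python
# Hypothetical lottery from this thread: $1 ticket, 1/1,000,000 chance of $5,000,000.
p_win = 1 / 1_000_000
jackpot = 5_000_000

# Expected gross return per ticket: $5 worth of "chances" for every $1 spent.
ev_return = p_win * jackpot

# But if that $1 is your entire bankroll, a single play busts you with
# probability 1 - p_win, no matter how favorable the EV is.
p_ruin = 1 - p_win
```

Positive EV says nothing about the chance of going broke; that tension is the subject of the rest of the thread.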

If you're a recreational gambler with a steady job, and you spend a small part of your weekly salary on some weekend bets, hoping to cash in big so you can have a nice long holiday or an early retirement, then bankroll management is not really an issue for you. Your bankroll is refilled on a regular basis, and you're not risking anything. However, I believe the appropriate definition of recreational gambling is that you're doing it for fun. If you're doing it to win money, looking to make your living as a gambler, constantly hunting for bets with a small edge to grind, knowing that if you do it right your profits will increase exponentially over time until one day you're rich enough to retire, then you simply must know how to protect your bankroll. Otherwise you'll end up an addict, putting yourself in debt because you "know you can beat the game, you've just been unlucky so far".


Posted · Report post

So you're saying you wouldn't play a lottery with a 1/1,000,000 chance of winning $5,000,000 with a $1 ticket?

+1

Precisely. For $1 you'd be buying $5 worth of chances (provided the pot can't be split between several winners). That's how I play the lottery. In gambling, EV is what counts.

.....

Theorem: the probability of winning (or even getting at least a fixed non-zero return) approaches 0 as the number of rolls approaches infinity.

Proof: There are six equally likely outcomes of one roll; let's denote them a1 through a6 (a1 being 0.7 and ascending to a6 being 1.5). Let fi denote the relative frequency of the outcome ai during a sequence of rolls. For example, if we roll 12 times and get 1*a1, 3*a2, 0*a3, 1*a4, 2*a5, and 5*a6, then f1 = 1/12, f2 = 3/12, f3 = 0/12, f4 = 1/12, f5 = 2/12, and f6 = 5/12. The exact number of outcomes ai is the relative frequency fi times N. The final outcome of our game will be G = 0.7^(f1*N) * 0.8^(f2*N) * 0.9^(f3*N) * 1.1^(f4*N) * 1.2^(f5*N) * 1.5^(f6*N) = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N.

The expected value of each fi is 1/6. By the law of large numbers, P(0.1666 < fi < 0.1667, for all i) approaches 1 as N approaches infinity. But if 0.1666 < fi < 0.1667, then G = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N. So the probability approaches 1 that your outcome is smaller than a value which approaches 0 as N approaches infinity. Conversely, for any given E > 0, the probability approaches 0 that your outcome is at least E.

I think you've misunderstood what I was asking you to prove.

With N rolls there are 6^N total variations. There are TW variations ending in a payoff P > 1 and TL variations ending with a final payoff P < 1. TW + TL = 6^N.

Find the limit of TW/6^N as N tends to infinity.

I agree that the "expected outcome" tends to zero as the number of rolls N tends to infinity. I just don't see it as all that relevant to winning in this game. If this game were offered in casinos, I would have most certainly won millions in a matter of days, starting with a bankroll of $100 or so. And I wouldn't let the entire $100 ride in just one sitting. Money management still counts. However, exponential accumulation of winnings should be the predominant scheme to win big.

Another sure thing: the casino offering that game would go bust very quickly.


Posted · Report post

So you're saying you wouldn't play a lottery with a 1/1,000,000 chance of winning $5,000,000 with a $1 ticket?

+1

Precisely. For $1 you'd be buying $5 worth of chances (provided the pot can't be split between several winners). That's how I play the lottery. In gambling, EV is what counts.

.....

Theorem: the probability of winning (or even getting at least a fixed non-zero return) approaches 0 as the number of rolls approaches infinity.

Proof: There are six equally likely outcomes of one roll; let's denote them a1 through a6 (a1 being 0.7 and ascending to a6 being 1.5). Let fi denote the relative frequency of the outcome ai during a sequence of rolls. For example, if we roll 12 times and get 1*a1, 3*a2, 0*a3, 1*a4, 2*a5, and 5*a6, then f1 = 1/12, f2 = 3/12, f3 = 0/12, f4 = 1/12, f5 = 2/12, and f6 = 5/12. The exact number of outcomes ai is the relative frequency fi times N. The final outcome of our game will be G = 0.7^(f1*N) * 0.8^(f2*N) * 0.9^(f3*N) * 1.1^(f4*N) * 1.2^(f5*N) * 1.5^(f6*N) = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N.

The expected value of each fi is 1/6. By the law of large numbers, P(0.1666 < fi < 0.1667, for all i) approaches 1 as N approaches infinity. But if 0.1666 < fi < 0.1667, then G = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N. So the probability approaches 1 that your outcome is smaller than a value which approaches 0 as N approaches infinity. Conversely, for any given E > 0, the probability approaches 0 that your outcome is at least E.

I think you've misunderstood what I was asking you to prove.

With N rolls there are 6^N total variations. There are TW variations ending in a payoff P > 1 and TL variations ending with a final payoff P < 1. TW + TL = 6^N.

Find the limit of TW/6^N as N tends to infinity.

I agree that the "expected outcome" tends to zero as the number of rolls N tends to infinity. I just don't see it as all that relevant to winning in this game. If this game were offered in casinos, I would have most certainly won millions in a matter of days, starting with a bankroll of $100 or so. And I wouldn't let the entire $100 ride in just one sitting. Money management still counts. However, exponential accumulation of winnings should be the predominant scheme to win big.

Another sure thing: the casino offering that game would go bust very quickly.

I proved that the probability of a randomly selected variation ending in a payoff P>1 approaches 0 as N approaches infinity. TW/6^N is the probability of a random variation ending in a payoff P>1. Hence I have proven that TW/6^N approaches 0 as N approaches infinity. Simultaneously, I also proved that this is true not just for payoffs P>1, but for payoffs P>E, for any E>0. For example, if we let Tp be the number of variations that give at least a penny back, then Tp/6^N also approaches 0 as N approaches infinity.

I will try to explain my proof step by step:

Given a randomly created variation of N rolls, we can count the relative frequency of each result. Let f1 be the relative frequency of 0.7-rolls, which is defined as the number of 0.7-rolls divided by the total number of rolls N. It follows that the number of 0.7-rolls is f1*N. Similarly, let f2 be the relative frequency of 0.8-rolls, f3 the relative frequency of 0.9-rolls... and so on until f6, being the relative frequency of 1.5-rolls.

Now, consider the interval 0.1666 < x < 0.1667. Clearly 1/6 is an interior point of this interval. Because the expected values of f1 through f6 all equal 1/6, the law of large numbers guarantees the following as N approaches infinity: the probability approaches 1 that f1 through f6 will all be within this interval. A direct implication* of f1 through f6 all being within this interval is that the payoff is less than 1. Replacing the clause "f1 through f6 will all be within this interval" with its direct implication "the payoff is less than 1" in the bold-face text above, we get that the probability approaches 1 that the payoff is less than 1**. If the probability of A approaches 1, then the probability of (not A) approaches 0. So the probability approaches 0 that the payoff is not less than 1. Consequently the probability that you get a payoff P > 1 approaches 0.

Since all variations are equally likely, the probability that you get a payoff P > 1 is equal to the number of such variations divided by the total number of variations, i.e. TW/6^N. So replacing "the probability that you get a payoff P > 1" with its equal "TW/6^N", we get the sought result: TW/6^N approaches 0.

*I proved this implication in my original proof (the inequality part), but I will try to explain that proof as well. So what I'm trying to prove is the implication "if f1 through f6 are all within the interval 0.1666<x<0.1667, then the payoff is less than 1".

We can write the payoff as P = 0.7^A * 0.8^B * 0.9^C * 1.1^D * 1.2^E * 1.5^F, where A is the number of 0.7-rolls, B is the number of 0.8-rolls, etc. But as already stated, the number of 0.7-rolls is f1*N, the number of 0.8-rolls is f2*N, etc. So replacing A through F with f1*N through f6*N, and extracting the common exponent N, we get the payoff as P = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N. This function will increase as f1, f2, and f3 decrease (because they are exponents of base numbers less than 1), and it will increase as f4, f5, and f6 increase (because they are exponents of base numbers greater than 1). So to maximize the payoff, we should minimize f1 through f3 and maximize f4 through f6. Setting f1 through f3 as 0.1666 and f4 through f6 as 0.1667, we get an upper bound for the payoff: P < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N, which can be verified by calculator to imply P < 0.9998^N, which in turn implies P < 1.

**If B is a direct implication of A, then P(B) is greater than or equal to P(A). Since P(A) approaches 1, and P(A) <= P(B) <= 1, it follows that P(B) approaches 1 as well. So we can replace the original clause with its direct implication.
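
The "verified by calculator" step is easy to reproduce. A quick check of the bound in plain Python:

```python
import math

# Per-roll factor when the losing payoffs (<1) get the smallest allowed
# frequency, 0.1666, and the winning payoffs (>1) get the largest, 0.1667.
upper = (0.7 ** 0.1666 * 0.8 ** 0.1666 * 0.9 ** 0.1666
         * 1.1 ** 0.1667 * 1.2 ** 0.1667 * 1.5 ** 0.1667)

print(upper)  # just under 0.9998, as the proof claims

# And 0.9998^N really does vanish: e.g. after a million rolls
tiny = 0.9998 ** 1_000_000
```

So any payoff whose frequencies all sit in the interval is squeezed below 0.9998^N, which tends to zero.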


Posted · Report post

This post refers to OP and the implications, if any, that stem from the fact that pulling twice for the 36 possible outcomes gives you a positive payoff.

OP asks to compare two methods: M1 and M2.
OP gives the payoffs as .7 .8 .9 1.1 1.2 1.5 and assures us they are not biased.
Over time there is no preference.

AM = 1.0333...
GM = 0.9996530... (so per two pulls, GM^2 = 0.9993061854)

M1: Bet $1. (Keep your winnings; bet a new $1.) Repeat (.) Repeat means: do not stop after n pulls.
M2: Bet $1. (Bet your winnings.) Repeat(.)

Over time, M1 wins. (AM>1.) I will take player's side on this game.
Over time, M2 loses. (GM<1.) I will take the house's side on this game.

Variation:

M3: Bet $1. (Pull twice. Keep your winnings. Bet a new $1.) Repeat(.)
To be precise, pull (twice) only for each of the 36 outcomes then stop and average the outcomes.
Over time this difference goes away.

For the 36 outcomes (and over time as well,) M3 wins. I will take player's side on this game.

There seem to be differences only about M2.

There is a conjecture that (M3 wins) ==> (M2 wins).

Let's revise the payoffs:
.49 .56 .56 .63 .63 .64 .72 .72 .77 .77 .81 .84 .84 .88 .88 .96 .96 .99 .99 1.05 1.05 1.08 1.08 1.2 1.2 1.21 1.32 1.32 1.35 1.35 1.44 1.65 1.65 1.8 1.8 2.25

AM=1.0677777...
GM=0.9993061854

Over time, M1 wins. (AM>1). I'll be the player here.
Over time, M2 loses (GM<1). I'll be the house here.

The second set of payoffs comprises the pairwise products of the first set.

M2 (win or lose) has the same outcome in the two cases.

M2 in the second case differs from M2 in the first case only in that we look at results after even-numbered pulls.

M1 still wins, and now it wins faster: the AM is larger.
M1 in the second case must have the same outcome over time as M3: the same actions are taken.

M3 thus does not relate to M2 at all. Nor does it depend on the GM.

M3 is equivalent to M1 with better payoffs. It wins, therefore, only because AM>1.

Conclusion: (M3 wins) =/=> (M2 wins).
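
These relationships can be checked directly: the pairwise-product payoff set has AM equal to the square of the original AM (which is why M3 is just a faster M1) and GM equal to the square of the original GM (which is why M2 is unaffected by pairing). A quick check in Python:

```python
import math
from itertools import product

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

am = sum(payoffs) / 6                # drives M1 (fresh $1 each pull)
gm = math.prod(payoffs) ** (1 / 6)   # drives M2 (let it ride)

# Second payoff set: the 36 ordered pairwise products (two pulls at a time).
pairs = [a * b for a, b in product(payoffs, repeat=2)]
am2 = sum(pairs) / 36
gm2 = math.prod(pairs) ** (1 / 36)

# am2 == am**2 and gm2 == gm**2: M3 is M1 with a better AM,
# while M2's geometric drift is exactly the same in both games.
```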


Posted · Report post

Can a Method-2 strategy win using the payoffs given in the OP?

Using the {.7 .8 .9 1.1 1.2 1.5} payoffs of the OP, I took 2,000,000 random selections (pulls) and put them into 20 sets of 100,000 each. From this I calculated {twenty products}, and they ranged in value from 1.38 x 10^-72 to a whopping 1.55 x 10^72! Six of the twenty were >1; the median value was about 10^-18. All payoffs were positive, of course, so the average was at least 1/20 of the highest value.

The AM, in fact, was a whopping 7.74 x 10^70.

In the context of comparing Methods 1 and 2 from the OP, what can we say from these {twenty numbers}?

  1. We might assert that if we play Method 2 multiple times, say n times, where n is large enough, we'll get at least one whopping payoff which, when we add the results, will more than offset the $(n-1) we may have lost on the other games. Voila! Therefore we conclude that Method 2 is a winning strategy that pays off handsomely. In this case, where n was a meager 20 games, using Method 2, $20 became $10^72. Wow. We will play that game any day of the week!
  2. Or we could consider the {20 numbers} to be a {set of representative payoffs} that apply to the 100,000-pull game. Let's play that game twenty times, using the strategy of Method 1. Thus, we bet $1. (Pull 100,000 times. Take our winnings. Bet another $1.) Repeat (20 times). We win big. $20 became $10^72. Wow. We will play that game any day of the week, too!

Notice these stories describe the same actions. Not surprisingly they give the same results. The only difference is that first we say we are using Method 2. Then we say we are using Method 1.

But in Method 2 there is no provision for periodically starting over with a fresh $1 and then at the end adding the results. That process is the heart of Method 1. To rightly apply a Method-2 strategy to these {20 numbers} we must multiply them. And that makes an amazing difference. Their sum is 1.55 x 10^72; but their product (the result of betting $1, then pulling 2 million times) is a very disappointing 6.88 x 10^-244. That is to say that our initial stake could have been as high as $10^244, and we'd end up with enough money for a McDonald's #7 meal with a medium Coke. My favorite.

Not surprisingly, the 2-millionth root of 6.88 x 10^-244 is 0.9997200883, in close agreement with the GM of {.7 .8 .9 1.1 1.2 1.5}.
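
A hedged re-run of this experiment (the seed is arbitrary, so the particular numbers will not match the post's, but the qualitative picture should: a few astronomical chunk products drag the mean far above the median, while the per-pull geometric mean lands just under 1):

```python
import math
import random
import statistics

random.seed(2024)  # arbitrary seed; exact figures will differ from the post
log_payoffs = [math.log(p) for p in (0.7, 0.8, 0.9, 1.1, 1.2, 1.5)]

# 20 chunks of 100,000 pulls each; work in log space to avoid overflow.
chunk_logs = [sum(random.choices(log_payoffs, k=100_000)) for _ in range(20)]
chunk_products = [math.exp(s) for s in chunk_logs]   # the {twenty products}

mean_product = sum(chunk_products) / 20              # Method-1-style average
median_product = statistics.median(chunk_products)   # a "typical" chunk

# Method 2 over all 2,000,000 pulls: one big product.
per_pull_gm = math.exp(sum(chunk_logs) / 2_000_000)
```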


Posted (edited) · Report post

I will now prove that calculating a geometric mean does not necessarily lead to an accurate conclusion, by counterexample.

I offer you a game where you pay $1 to have a 99/100 chance of winning $1000 and a 1/100 chance of losing your wager. The geometric mean of the possible outcomes is zero, so it would be foolish to play in such a game if you repeatedly bet your bankroll(?)

You can argue that playing an infinite number of times would guarantee that you lose your wager, but for any sensible number of plays it'd be a no-brainer. I'd even argue that after a large number of plays, the infinitesimal chance you'll win those games times the payoff if you do win (which is very large as far as large numbers go) would make the game worthwhile.

Edited by plasmid

Posted · Report post

Each bet of $1 gives me a .99 chance of winning $1000 for an expectation of $990.
990 is the AM of one 0 and ninety-nine 1000s.
So I will play the game (method 1) as long as I can stay awake.

There really is no practical way to show a failure for Method 2 other than to say eventually you will hit a zero payoff and lose your $1 bet, although the numbers get very large. Your stake grows by a factor of 1000 each time the payoff is not zero. And you have to apply the payoffs 69 times for a 50% chance of getting a 0. By then your stake is $10^207. Applying the payoffs 916 times reduces the survival chances of your stake (by then $10^2748) to about .0001. It's tempting to say you'd play that game multiple times as well. But that's still method 1.

The reality is your stake remains finite and thus vulnerable to being wiped out by a 0 payoff. But the other reality is that so long as n remains finite, getting a 0 payoff is not a certainty. That is, (99/100)^n is never zero. The strongest statement I can think of now is that as n increases you have a vanishingly small chance of winning an astronomical amount of money. But when your winnings equal the number of electrons in the universe, the probability of taking them home will still not be zero. Even though applying the payoffs once per second might take you the age of the universe to get your stake that high.
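
Those survival-versus-stake figures are quick to verify. A sketch in plain Python (the 50%-after-69-wins figure and the $10^207 and $10^2748 stakes all check out):

```python
import math

p_survive_one = 0.99   # chance a single let-it-ride bet avoids the zero payoff
growth = 1000          # stake multiplier on each winning bet

def survival(n):
    """Chance of n consecutive wins when letting everything ride."""
    return p_survive_one ** n

def stake_digits(n):
    """log10 of the stake after n winning bets, starting from $1."""
    return n * math.log10(growth)

# survival(69) is about 0.5, at which point the stake is $10^207;
# survival(916) is about 0.0001, with the stake at $10^2748.
```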

I didn't buy the (31/30)^n argument in previous posts, because I sensed method 1 lurking in the reasoning. It's too easy to automatically think about playing a game multiple times and ending up ahead, vs playing one game forever. Forever is such an impractically long time. So I don't completely buy the current 990^n growth rate either. But I look for an expression that goes to zero in the limit, and can't find it.

In my post 30, there was also a small but non-zero chance of being ahead after 2 million pulls.

Bravo. I fall on my sword.


Posted (edited) · Report post

When playing indefinitely, you cannot cash in your winnings and spend the money on this side of Infinity. And on that other side, who will care how much you've won, or what fraction of $1 you have remaining?

Strange things happen at infinity:

1. You are expected to end up with nothing if you ride your winnings an infinite number of times.

2. That is because the most likely outcome is the one where each of the six possible roll values has appeared the same number of times. The probability of that most likely outcome is zero. (That requires a proof though. See the spoiler.)

3. Your average payoff after an infinite number of rolls should be infinite, if you bet just $1 on every roll.

4. Your average payoff if you ride your winnings is going to be even more infinite. In fact, it is going to be infinitely greater than that from the previous point (3). That is because when riding your winnings, the infinite variable is an exponent, whereas when betting a fixed amount on each roll the infinite variable is a multiplier.

5. To see your average payoff, you must play an infinite number of sets of an infinite number of rolls each. You need not live forever to go on that gambling binge. Just roll an infinite number of dice every time, and roll them infinitely fast.

Let's take N in increments of 6: N = 6, 12, 18, …


The number of variations of N rolls in which each of the 6 values appears exactly N/6 times is A = N! / ((N/6)!)^6.

The total number of variations for N rolls is T = 6^N. Thus the probability of that median string is M = A / T.
What is the limit of M as N tends to infinity? I can't tell just by looking at it. Note that the ratio M grows smaller as N gets larger, but less and less so.
I.e., M(N+6) = M(N)*(N+1)*(N+2)*(N+3)*(N+4)*(N+5)*(N+6) / (N+6)^6.
The multiplier is less than 1, but it tends to 1 as N gets larger. I say, let math majors figure out why this expression tends to zero.
At any rate, you have about a 50% chance or better that your very own infinite string of rolls finishes ahead of that median payoff calculated by geometrical means. Still, very likely, the payoff is going to be zero.
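
For the math majors: by Stirling's formula the central multinomial term decays like a constant (about 2.18) times N^(-5/2), which is why M tends to zero even though each step's multiplier tends to 1. A numerical check using exact big-integer arithmetic:

```python
from math import factorial

def central_prob(n):
    """P(each of the 6 faces appears exactly n/6 times in n rolls)."""
    assert n % 6 == 0
    ways = factorial(n) // factorial(n // 6) ** 6   # exact integer count A
    return ways / 6 ** n                            # M = A / T

# M shrinks toward 0, but only polynomially (roughly const * n**-2.5).
```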

Coming back from infinity: as I said here before, for all practical purposes it is impossible to lose in this game. And you must ride your winnings if you want to win big.

The Expected Value (average payoff) for 1000 consecutive rolls is (1.0333...)^1000 = 1.74*10^14. You cannot run enough experiments to confirm it empirically. Nonetheless, the winnings are good.

In order to help its patrons to win large sums of money, the casino added a small modification to the game. Namely, the minimum bet requirement of $1. So while rolling the dice and riding your winnings, if your total falls below $1, you must add enough cents to make your next bet at least $1.

Having obtained change for $20 in coins, I set out to play a few games of 1,000 consecutive rolls each, starting with $1. If I rolled a single die by hand, 1,000 rolls would take me a couple of hours. (Could be more, between ordering drinks and having a conversation with the dealer.) I have repeated the experiment 10 times (a week's worth of visits to the casino.)
The total amounts I had to add to make the minimum bet in the course of the 1000-roll sittings ranged from $0.40 to $15.45. Four of the times I suffered a frustrating spell of bad luck, netting losses between $3.52 and $8.26. The other six times I won, netting between $0.11 and $624,192,886.70. There were a couple of other wins in the tens of thousands of dollars.
Running 10 averages of 100 1000-roll sets each, I found average amounts added to the pot between $4.73 and $6.63. The average wins ranged between $2,321,368 and $249,479,144,535. (Of course, you must multiply that by 100 to calculate your total take-home.) Most average wins were in the hundreds of millions or more, and there were no average losses.

For comparison, a typical 1000-roll set of betting just $1 at a time would net somewhere around $33.33.
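
The modified game is easy to simulate. A sketch of the minimum-bet top-up rule as described above (random, so individual runs will differ from the figures quoted):

```python
import random

random.seed(7)  # arbitrary; individual results will differ from the post's
payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

def ride_with_minimum(n_rolls):
    """Ride $1 for n_rolls, topping the bet up to $1 whenever it falls below.
    Returns (final bankroll, total top-ups added)."""
    bank, added = 1.0, 0.0
    for _ in range(n_rolls):
        bet = max(bank, 1.0)
        added += bet - bank          # cents added to reach the $1 minimum
        bank = bet * random.choice(payoffs)
    return bank, added

runs = [ride_with_minimum(1000) for _ in range(50)]
finals = [f for f, _ in runs]
# Expect a wide spread: some busts below $1, some enormous wins.
```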

To sum up:

In this game, having a bankroll of $20 and a reasonable amount of gambling time (a few days):

1) If you ride your winnings, you most likely end up with winnings in the millions of dollars.

2) When betting $1 at a time, in the same amount of time, you'll almost certainly walk away with a few hundred dollars. (Frankly, not fun and not worth your time.)

Edited by Prime

Posted · Report post

Now's my turn to disagree with Prime.

If you ride your winnings, the most likely outcome is not that you will win millions. The most likely outcome is that you will lose money. It's just that your payoff if you do win is large enough to more than offset your odds of losing, like a favorable lottery.

The mean outcome of always letting everything ride is greater than your initial $1 entry wager.

The median outcome of always letting everything ride is less than your initial $1.

If there's doubt about this, it could be tested by simulating a bunch of runs and calculating both the mean and median.

Which of those two, the mean or median, is "most important" is a philosophical argument.
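
That test is easy to run. A minimal sketch (random, so the exact numbers vary run to run, but with rides this long the mean sits far above the median, which sits below the $1 stake):

```python
import math
import random
import statistics

random.seed(123)  # arbitrary; the qualitative gap is what matters
log_payoffs = [math.log(p) for p in (0.7, 0.8, 0.9, 1.1, 1.2, 1.5)]

def let_it_ride(n_rolls):
    """Final bankroll after riding $1 for n_rolls (computed in log space)."""
    return math.exp(sum(random.choices(log_payoffs, k=n_rolls)))

outcomes = [let_it_ride(50_000) for _ in range(200)]
mean = sum(outcomes) / len(outcomes)
median = statistics.median(outcomes)
# The mean is pulled up by rare huge wins; the median (the typical
# player's result) is a loss.
```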


Posted (edited) · Report post

Now's my turn to disagree with Prime.

If you ride your winnings, the most likely outcome is not that you will win millions. The most likely outcome is that you will lose money. It's just that your payoff if you do win is large enough to more than offset your odds of losing, like a favorable lottery.

The mean outcome of always letting everything ride is greater than your initial $1 entry wager.

The median outcome of always letting everything ride is less than your initial $1.

If there's doubt about this, it could be tested by simulating a bunch of runs and calculating both the mean and median.

Which of those two, the mean or median, is "most important" is a philosophical argument.

But I have run a bunch of simulations. And some numeric analysis as well. Enough to convince myself. See my posts 12, 20, 22, and 33 inside the spoilers.

Riding your winnings for 1000 rolls seems like a good choice. That won millions in my simulations in very short order.

At this point I feel that for further discussion we need to solve analytically for the probability of winning $X or more while riding for N consecutive rolls, e.g., $1,000,000 or more after 1000 rolls. For that one must find the exact number of 1000-roll variations yielding more than 1,000,000 and divide that number by 6^1000.

I don't feel up to the task at the moment. But I can provide any such data for up to 12 rolls.

Edited by Prime

Posted (edited) · Report post

It always amazes me when people can take what I thought was a simple fun question and derive so much complicated analysis from it. Surprisingly, this question came from a discussion with my 7-year-old child.

Edited by BMAD

Posted · Report post

Oh sorry, I didn't see that you already had the spoiler in post 22 showing that there's a <50% chance of ending up with $1 or more after repeatedly letting everything ride. So that shows that the median and most likely outcome is in fact a loss.


Posted · Report post

Prime, if you have a bankroll of $20 and can run method 2 twenty times, then of course you're going to win. No one here is arguing against that. But method 2 in OP does not say anything about you being able to reset your stake at $1 whenever you feel like it. You get one shot. One. Shot. For all purposes your bankroll is $1. If you lose you lose. The most likely result is you will lose. Your simulations all show that. If you run 10 simulations of 1000 rolls each and 6 of them are losing, it means 6/10 people will lose. The other four people will win. As N grows larger the probability of winning approaches 0, as I have proven mathematically. Which means, if all 7,000,000,000 people in the world run method 2 for long enough, the probability approaches 1 that they will all lose.


Posted (edited) · Report post

It always amazes me when people can take what I thought was a simple fun question and derive so much complicated analysis from it. Surprisingly, this question came from a discussion with my 7-year-old child.

What makes these topics so interesting to me is my lack of education. Math education in particular.

Oh sorry, I didn't see that you already had the spoiler in post 22 showing that there's a <50% chance of ending up with $1 or more after repeatedly letting everything ride. So that shows that the median and most likely outcome is in fact a loss.

Perusing your argument from post 24, imagine a casino offered you a choice between two games, 1000 dice rolls each:

1) Stake $1 with a 5% chance of winning $1,000,000 or more; OR

2) Deposit $300 with a 50% chance of winning something between $16 and $50, while in the extremely unlikely cases losing your entire $300 or winning $500.

Which game would you play? (The percentages here are illustration -- not an actual calculation.)

Prime, if you have a bankroll of $20 and can run method 2 twenty times, then of course you're going to win. No one here is arguing against that. But method 2 in OP does not say anything about you being able to reset your stake at $1 whenever you feel like it. You get one shot. One. Shot. For all purposes your bankroll is $1. If you lose you lose. The most likely result is you will lose. Your simulations all show that. If you run 10 simulations of 1000 rolls each and 6 of them are losing, it means 6/10 people will lose. The other four people will win. As N grows larger the probability of winning approaches 0, as I have proven mathematically. Which means, if all 7,000,000,000 people in the world run method 2 for long enough, the probability approaches 1 that they will all lose.

I think what we are arguing about here is called the Normal Distribution, or Gaussian Distribution, or Bell Curve, or some other such name.

The math formulas describing this concept involve integrals and special numbers such as Pi and e. I was too lazy to even start studying things like that when I was young, let alone now, when it's been years since my daughter finished high school.

However, I would be interested to see someone here post a mathematical formula for calculating the probability of ending up with an amount of X or more after N consecutive turns of riding the winnings. I would make a sincere effort to understand such an equation and, maybe, even try to disprove it.

If we had just two possible values, it could be a straightforward borrowing of a formula with integrals from an appropriate math book or website. Having 6 possible values seems to complicate matters a bit. It is not all that easy to see exactly how the Payoff is affected by deviations from the Median.

The Median Payoff tends to zero, as noted by bonanova at the very beginning of this topic. If you ride your winnings to infinity, you are more likely to end up with zero.

However, I am not at all convinced that the probability of ending up with a positive outcome is zero for infinite trials. To prove that, one must study the correlation between the deviations from the Median Variation and the resulting Payoff. After all, at infinity there will be an infinite number of variations with positive payoffs.

In other words, even if you show that the probability of the ratio of variants within any given interval around the Median tends to 1, you'd still have to show that the resulting Payoff within that interval is zero.

For riding your winnings for 12 consecutive rolls, the Average Payoff (EV) is approximately 1.4821 = (1.0333...)^12.

The chance of ending up with $1.48 or more after 12 rolls is about 0.32774 (almost one third.)

The chance of ending up with $7 or more after 12 rolls is just over 0.0145 (a respectable 1.45%).

The chance of winning $6 in a game where you bet $1 at a time is 1/6^12, or nil. (And that's the most you can possibly win.)
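
These 12-roll figures can be checked exactly rather than by simulation: enumerate all frequency vectors (compositions of 12 into 6 counts), weight each by its multinomial coefficient, and sum the tail. A sketch in plain Python; it prints the tail probabilities instead of asserting the rounded values quoted above:

```python
import math

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
N = 12

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegatives."""
    if parts == 1:
        yield (total,)
        return
    for head in range(total + 1):
        for tail in compositions(total - head, parts - 1):
            yield (head,) + tail

dist = []  # (payoff, exact probability) pairs
for counts in compositions(N, len(payoffs)):
    ways = math.factorial(N)
    for c in counts:
        ways //= math.factorial(c)   # multinomial coefficient, exact
    payoff = math.prod(p ** c for p, c in zip(payoffs, counts))
    dist.append((payoff, ways / 6 ** N))

def p_at_least(x):
    """Exact probability the 12-roll ride returns at least x."""
    return sum(pr for pay, pr in dist if pay >= x)

print(p_at_least(1.4821), p_at_least(7))
```

The printed values can be compared against the figures quoted above.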

10 averages of 100 trials of 1000-dice rolls:

Lowest average payoff: $26,807

Highest average payoff: $199,503,018.

Overall percentage of ending up ahead after 1000 rolls: 48.78%.

In view of the above, I think the winning likelihood of a 1000-roll ride is being underestimated.

Edited by Prime

Posted · Report post

In other words, even if you show that the probability of the ratio of variants within any given interval around the Median tends to 1, you'd still have to show that the resulting Payoff within that interval is zero.

Done that. Twice. If the ratios are all within the interval 0.1666 < x < 0.1667, then the resulting payoff is always less than 1, and it tends to zero as N tends to infinity. An upper bound for the payoff is: Payoff < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N < 1. In plain words: over time, one hundred percent of the variations will be close enough to the median variation that their payoff is zero.

To understand why this matters, do not concern yourself with the nominal values of the stakes and payoffs, but rather with their effect on your bankroll.

If your bankroll is $1, and you bet $1 and hit 0.7, your bankroll is reduced from 1 to 0.7, a change by the factor 0.7. Here the nominal change equals the bankroll change.

However, if your bankroll had been $2 instead, and you bet $1 and hit 0.7, your bankroll would be reduced from 2 to 1.7, a change by the factor 0.85. Because you only bet half your bankroll, the change is not as dramatic. If you bet half your bankroll instead of your entire bankroll, the payoffs would result in a change of 0.85, 0.9, 0.95, 1.05, 1.1, or 1.25. The geometrical mean of these changes is (0.85*0.9*0.95*1.05*1.1*1.25)^(1/6) ~ 1.008, which means you now have a positive expected effect on your bankroll. If you bet half your bankroll every time, you can expect it to grow exponentially (1.008^N).

Now we might ask, where is the break-even point between a good game and a bad game? How much of your bankroll can you bet before you expect to lose money?

Well, if you bet the fraction x of your bankroll, the payoffs would be 0.7x, 0.8x, 0.9x, 1.1x, 1.2x, and 1.5x. Adding the 1-x you have remaining on the side, the change in bankroll would be 1-0.3x, 1-0.2x, 1-0.1x, 1+0.1x, 1+0.2x, or 1+0.5x. The geometrical mean would be GM = ((1-0.3x)(1-0.2x)(1-0.1x)(1+0.1x)(1+0.2x)(1+0.5x))^(1/6), which can be solved for x at GM = 1. The solution in the interval 0<x<1 is a bit over 0.989, which means the game is good for you as long as you bet between 0% and 98.9% of your bankroll.
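
That root can be pinned down numerically. A sketch using bisection on the per-roll growth factor (the 1.008 half-bankroll figure and the just-over-0.989 solution both fall out):

```python
import math

edges = [-0.3, -0.2, -0.1, 0.1, 0.2, 0.5]   # payoff minus 1 for each face

def growth(x):
    """Per-roll geometric mean of the bankroll when betting fraction x."""
    return math.prod(1 + a * x for a in edges) ** (1 / 6)

# growth(0.5) is about 1.008: betting half the bankroll grows it exponentially.
# Bisection for growth(x) = 1 on [0.5, 1], where growth is decreasing:
lo, hi = 0.5, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if growth(mid) > 1:
        lo = mid
    else:
        hi = mid
# lo is now the largest still-profitable fraction, a bit over 0.989
```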

But why am I not concerned with the nominal values of the winnings, but rather their effect on my bankroll? Consider this: if you had a $1,000,000 total fortune (this includes your house and valuables), would you risk it all for a 50% chance to triple it? Would you risk it all for an 80% chance to double it? Would you risk $999,000 of it for a 60% chance to double it? Very few people would be silly enough to take those risks, even though the EV is very good. The true difference between having one million and being dead broke is a lot more dramatic than the difference between having two million and having one million, but the nominal difference is the same.


Posted · Report post

...........

Theorem: the probability of winning (or even getting at least a fixed non-zero return) approaches 0 as the number of rolls approaches infinity.

Proof: There are six equally likely outcomes of one roll; let's denote them a1 through a6 (a1 being 0.7 and ascending to a6 being 1.5). Let fi denote the relative frequency of the outcome ai during a sequence of rolls. For example, if we roll 12 times and get 1*a1, 3*a2, 0*a3, 1*a4, 2*a5, and 5*a6, then f1 = 1/12, f2 = 3/12, f3 = 0/12, f4 = 1/12, f5 = 2/12, and f6 = 5/12. The exact number of outcomes ai is the relative frequency fi times N. The final outcome of our game will be G = 0.7^(f1*N) * 0.8^(f2*N) * 0.9^(f3*N) * 1.1^(f4*N) * 1.2^(f5*N) * 1.5^(f6*N) = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N.

The expected value of each fi is 1/6. By the law of large numbers, P(0.1666 < fi < 0.1667, for all i) approaches 1 as N approaches infinity. But if 0.1666 < fi < 0.1667 for all i, then G = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N. So the probability approaches 1 that your outcome is smaller than a value which approaches 0 as N approaches infinity. Conversely, for any given E > 0, the probability approaches 0 that your outcome is at least E.
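For a quick sanity check of the two quantities this proof leans on, the sketch below (my own addition, not part of the original argument) computes the per-roll geometric mean of the six multipliers and the slanted-interval bound:

```java
// Numeric check of the two constants in the proof above:
// the per-roll geometric mean of the multipliers, and the slanted-interval bound.
public class GeometricMeanCheck {
    // Geometric mean of a single roll: (0.7*0.8*0.9*1.1*1.2*1.5)^(1/6)
    static double gmPerRoll() {
        double product = 0.7 * 0.8 * 0.9 * 1.1 * 1.2 * 1.5;  // = 0.99792
        return Math.pow(product, 1.0 / 6.0);
    }

    // The bound from the proof: exponents 0.1666 on the losing multipliers,
    // 0.1667 on the winning ones.
    static double bound() {
        return Math.pow(0.7, 0.1666) * Math.pow(0.8, 0.1666) * Math.pow(0.9, 0.1666)
             * Math.pow(1.1, 0.1667) * Math.pow(1.2, 0.1667) * Math.pow(1.5, 0.1667);
    }

    public static void main(String[] args) {
        System.out.printf("Per-roll geometric mean: %.6f%n", gmPerRoll()); // ~0.999653, below 1
        System.out.printf("Slanted-interval bound:  %.6f%n", bound());     // ~0.999722, below 0.9998
    }
}
```

Both values come out below 1 (about 0.999653 and 0.999722), confirming that 0.9998^N is a valid upper bound.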

I like the reasoning, but the proof by example uses a slanted interval. The true mean of 1/6 is not in the middle of the interval 0.1666 to 0.1667. What happens if you use a truly centered interval, e.g., from 9/60 to 11/60? It seems to lead to the opposite conclusion.


Posted · Report post

There is that little technicality in the above post. And then there are a couple of other points to make. Suppose each of the 6 individual variants deviates within a chosen interval with a probability approaching 1. Don't we need to show that the ratio of all combinations of the deviating variants to all possible combinations (of 6^N, where N tends to infinity) also approaches 1?

And then, of course, there is that “Law of Big Numbers”. Who passed that law? When was it ratified? How is it enforced? The entire proof is riding on it.

But, I suppose, that is beyond the scope of this forum, unless there is some clever and funny way of demonstrating that Normal Distribution thingy without using any integrals. So never mind that.

If I accept the proof for ending up with nothing after riding my winnings an infinite number of times, there are only a few differences remaining in the interpretation of the OP and our gambling philosophies.

1) Does BMAD's casino compel its patrons to play forever once they started? I hope -- not. At least the OP does not mention it. (Could be in the fine print, though:)

2) While I am being held to playing a pure form of riding the winnings (Post#38), somehow, Rainman is allowed a modification of betting just half of his bankroll on each turn (Post#40). That is a very good, prudent winning form, but I like my own modification (Post#33), with the minimum bet requirement of $1 and a bankroll of $20, better -- it seems faster and more exciting.

3) Suppose we have limited time, limited resources, and BMAD allows his patrons to leave the game whenever they want to cash in their winnings. Then what is better: showing up at the casino with a bunch of singles and betting exactly $1 on each turn until closing time; or walking in with just $1 and riding your entire bankroll on each turn until satisfied with the amount won (more than a million) or rolling the die 1000 times, whichever comes first?

The riding seems more appealing to me. And 1000 consecutive rides look very good. After all, it is a long long way from 1000 to infinity.

With 1000-roll riding, you cannot lose more than $1;

your overall chance of ending up ahead looks like 48.78%;

and your chance of winning more than a Million $$ looks like 4%, or better*.

Whereas betting $1 at a time, you are basically working for $33 all day long.

So choose.

*My simulation does not stop at the Million $$, but keeps rolling until 1000 rolls are done. In so doing it loses in some cases the millions procured in the interim, or wins some crazy amounts to the tune of 10^12, which BMAD's casino is not able to honor.

Again, I invite anyone, who feels up to the challenge, to post theoretical justification (or disproof) of the statistics of a 1000-roll ride presented here.
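Taking up that invitation with a quick Monte Carlo sketch (my own code; the trial count and seed are arbitrary choices, and the six multipliers are taken from the game as stated above):

```java
import java.util.Random;

// Monte Carlo of the 1000-roll "let it ride" strategy starting from $1,
// estimating the chance of ending ahead and the chance of passing $1,000,000.
public class ThousandRollRide {
    // Returns {P(end ahead), P(end above $1M)} over the given number of trials
    static double[] simulate(int trials, int rolls, long seed) {
        double[] mult = {0.7, 0.8, 0.9, 1.1, 1.2, 1.5};
        Random rng = new Random(seed);
        int ahead = 0, millionaires = 0;
        for (int t = 0; t < trials; t++) {
            double bankroll = 1.0;
            for (int r = 0; r < rolls; r++) {
                bankroll *= mult[rng.nextInt(6)];  // ride the whole bankroll each roll
            }
            if (bankroll > 1.0) ahead++;
            if (bankroll > 1_000_000.0) millionaires++;
        }
        return new double[]{(double) ahead / trials, (double) millionaires / trials};
    }

    public static void main(String[] args) {
        double[] p = simulate(100_000, 1000, 42L);
        System.out.printf("P(end ahead) ~ %.4f%n", p[0]);  // around 0.48
        System.out.printf("P(end > $1M) ~ %.4f%n", p[1]);  // around 0.04
    }
}
```

With 100,000 trials this lands near a 48% chance of ending ahead and roughly a 4% chance of ending above $1,000,000 -- the same ballpark as the figures quoted above.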


Posted · Report post


I like the reasoning, but the proof by example uses a slanted interval. The true mean of 1/6 is not in the middle of the interval 0.1666 to 0.1667. What happens if you use a truly centered interval, e.g., from 9/60 to 11/60? It seems to lead to the opposite conclusion.

I chose a slanted interval because it's shorter to write out. It has no bearing on the conclusion. If you insist on a median interval, we can use the interval 99999/600000<x<100001/600000 instead. The probability of the relative frequencies being in this interval approaches 1. If 99999/600000<x<100001/600000, then we can certainly conclude that 0.1666<x<0.1667, which I have already proven to imply that the payoff is less than 1 and tends to 0.

And then, of course, there is that “Law of Big Numbers”. Who passed that law? When was it ratified? How is it enforced? The entire proof is riding on it.


http://en.wikipedia.org/wiki/Law_of_large_numbers

It is a proven theorem.


Posted · Report post

This puzzle seemed to merit an analysis that went further than I had taken it.

Especially since most of my analysis was dead wrong. :duh:

geometric.pdf


Posted · Report post

This puzzle seemed to merit an analysis that went further than I had taken it.

Especially since most of my analysis was dead wrong. :duh:

geometric.pdf

Excellent insight, bonanova. The idea of using the log is brilliant. However, my code (and theoretical results) disagree with the simulations shown by you and prime.

Using bonanova's insight of taking the log, and the central limit theorem (see the attached pdf for details), it is easy to construct the probability of winning (a larger bankroll at the end) for any number of games N if we use the strategy of letting the winnings ride.

Incidentally, the theoretical result using the Gaussian curve happens to answer prime's inquiry, showing that the chance of winning approaches 0 as the number of games approaches infinity. It also provides a statistical distribution for the case where N = 1000.

If N is 1000, then my simulation and theoretical calculation agree that the chance of winning is about 2.12%. However, both bonanova and prime agree that the chance of winning is about 48% or so.

I'm not sure what is causing the difference in the simulation. (And the fact that both bonanova and prime have the same result indicates that there might be an obvious logic that I might have missed). Perhaps either bonanova or prime could elaborate on how the simulation was done?

winning.pdf


Posted · Report post

I get similar numbers as bonanova and prime.

I did 1000 trials of playing 1000 games per trial.

Average final bankroll ~ $2E10

Median final bankroll = $0.49

Number of winning trials = 465

If you're on a Windows machine with a Java JDK installed (free to download from Oracle if you don't have it already): copy the code below and paste it into a txt file called LetItRide.java, then on a command prompt in that directory type javac LetItRide.java to compile it, then type java LetItRide to run it.

import java.util.Random;
import java.util.Arrays;

class LetItRide {
  static int trials = 1000;
  static int games = 1000;

  public static void main(String[] args) {
    double[] bankroll = new double[trials];
    double[] multiplier = {0.7, 0.8, 0.9, 1.1, 1.2, 1.5};
    double sumOfTrials = 0;
    int winningTrials = 0;
    Random randnums = new Random();

    for (int trialnum = 0; trialnum < trials; trialnum++) {
      bankroll[trialnum] = 1;  // start each trial with $1
      for (int gamenum = 0; gamenum < games; gamenum++) {
        bankroll[trialnum] *= multiplier[randnums.nextInt(6)];  // let it ride
      }
      sumOfTrials += bankroll[trialnum];
      winningTrials += (bankroll[trialnum] > 1) ? 1 : 0;
    }
    System.out.println("Average final bankroll: " + sumOfTrials / trials);
    Arrays.sort(bankroll);
    System.out.println("Median final bankroll: " + bankroll[trials / 2]);
    System.out.println("Number of winning trials: " + winningTrials);
  }
}

Posted · Report post

I get similar numbers as bonanova and prime.

I did 1000 trials of playing 1000 games per trial.

Average final bankroll ~ $2E10

Median final bankroll = $0.49

Number of winning trials = 465


Thanks for the code. I think I see what is causing the discrepancy between the simulation results. I made a mistake in transcribing the game multipliers...

The correct game multipliers are (.7, .8, .9, 1.1, 1.2, 1.5), but I used (.7, .8, .9, 1, 1.2, 1.5), which causes the discrepancy in results. Let O_N be the bankroll at the end of N consecutive games of letting the winnings ride. Using the correct multipliers,

log(O_N) ~ Gaussian( -0.0003470277 * N, 0.2565272 * sqrt(N) )

That is, the log of O_N is normally distributed with mean -0.0003470277 * N and standard deviation 0.2565272 * sqrt(N).

As shown in post #45, we can calculate the probability of ending up ahead for any N using the distribution above. For N = 1,000, the chance of winning is 0.4829388, which is consistent with the simulation results from bonanova, prime, and plasmid. For N = 1,000,000, the chance of winning is 0.0880612. If N = 100,000,000, then the chance of ending up with a bankroll larger than 1 is 5.351129e-42.
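For anyone who wants to reproduce those figures without statistical tables, here is a small sketch (my own; it hard-codes the mean and standard deviation quoted above, and uses the Abramowitz-Stegun 26.2.17 approximation of the normal CDF, accurate to about 7.5e-8):

```java
// Chance of ending ahead after N let-it-ride rolls, from the Gaussian
// approximation of log(O_N) quoted in the post above.
public class RideWinChance {
    static final double MU = -0.0003470277;  // mean of log(multiplier) per roll
    static final double SIGMA = 0.2565272;   // sd of log(multiplier) per roll

    // Standard normal CDF, Abramowitz & Stegun approximation 26.2.17
    static double phi(double z) {
        if (z < 0) return 1.0 - phi(-z);
        double t = 1.0 / (1.0 + 0.2316419 * z);
        double poly = t * (0.319381530 + t * (-0.356563782
                    + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        return 1.0 - Math.exp(-z * z / 2) / Math.sqrt(2 * Math.PI) * poly;
    }

    // P(bankroll > 1 after n rolls) = P(log O_n > 0)
    static double winChance(int n) {
        double z = (0 - MU * n) / (SIGMA * Math.sqrt(n));
        return 1.0 - phi(z);
    }

    public static void main(String[] args) {
        System.out.printf("N = 1,000:     %.7f%n", winChance(1000));     // ~0.4829388
        System.out.printf("N = 1,000,000: %.7f%n", winChance(1000000));  // ~0.0880612
    }
}
```

It prints approximately 0.4829388 for N = 1,000 and 0.0880612 for N = 1,000,000, matching the numbers above.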


Posted · Report post

So it seems the following statements are true?

  1. We can't have infinite pulls, but we can see trends as the number of pulls gets big.
  2. Introducing multiple players makes the game much like a favorable lottery with a group of participants: almost all the individuals will lose. But if enough players participate, (significantly more than N if the winning odds are 1/N,) there should be at least one winner; and the winnings turn out to be great enough that the group will win. That is, the winnings of the few will cover the $1 stakes of the many, with money left over.
  3. Introducing multiple players (each with his own fresh $1 stake and then looking at the aggregate payoffs to determine whether the game is won or lost) is indistinguishable from an individual person playing multiple times, periodically refreshing his $1 stake and aggregating his winnings. This is the essence of a Method 1 game whose only difference is the fact that it is played using a different set of payoffs.
  4. If the payoffs of multiple players are multiplied rather than aggregated, the probability with which the group loses is much closer to certainty than the loss probability of the individual players is. The group becomes in essence a single Method 2 player using a much larger number of pulls. Thus, the win probability of an individual player trends to zero with increased pulls.
  5. Since (2) mimics Method 1, and (4) mimics Method 2, it seems justifiable to say that the win probability for an individual Method-1 player trends to unity, while the win probability for an individual Method-2 player trends to zero, even though that probability multiplied by his likely payoff (if he wins) is greater than his original stake. (In the sense that it is justifiable to say that an individual $1 billion lottery ticket that has a one-in-a-million chance of winning is almost certainly a losing ticket, and that the statement becomes stronger if both the winnings and odds are escalated by a factor 1000.)

Posted · Report post

#5 is factually true, but I disagree with its implications, mainly because the OP doesn't say you have to play a large number of times. Simulations show that the odds of winning after 1000 plays are not too far from 50/50 (roughly 45%). Adding another couple of lines to the code showed that the odds of winning more than $1,000 are roughly 17%, and you have at least a 1-in-1,000 chance of winning a bankroll that has to be expressed in scientific notation. I'd consider that favorable for a $1 wager.


Posted · Report post

I have no idea who to give credit to for this one. :) team victory maybe??
