BrainDen.com - Brain Teasers

Comparing gambling systems


BMAD

Question

Consider a gambling machine A. When you put in $X and pull the handle, it will spit out (equally likely) either $0.7*X, $0.8*X, $0.9*X, $1.1*X, $1.2*X, or $1.5*X.
Now consider the following two ways of playing this machine:
Put in $1, pull the handle, and keep whatever you get. Repeat.
Initially, put in $1. Pull the handle, then put in whatever you get. Repeat.
Can you win money with this machine? Which is the better way to play? How can this be?

Recommended Posts


The average payoff is (0.7+0.8+0.9+1.1+1.2+1.5)/6 = 1.0333...

So when betting $1 at a time, you make on average $0.0333... per game.
When you bet all your winnings each time, your money grows exponentially with a factor of 1.0333... per game, which should bankrupt the casino very quickly. (Starting with $1, after just 422 games you should expect to have over a million dollars.)
To convince yourself the exponential formula is applicable, calculate in a spreadsheet all 36 possibilities for just two turns, add them up and divide by 36: (0.7*0.7 + 0.7*0.8 + ... + 1.5*1.2 + 1.5*1.5)/36 = 1.06777..., which coincides with (1.0333...)^2 = 1.06777...
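That spreadsheet check can also be sketched in a few lines of Python (a reconstruction; the original spreadsheet isn't shown):

```python
from itertools import product

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

# All 36 equally likely two-pull outcomes, averaged
two_turn_avg = sum(a * b for a, b in product(payoffs, repeat=2)) / 36

single_avg = sum(payoffs) / 6
print(two_turn_avg)     # 1.06777...
print(single_avg ** 2)  # the same value: independent pulls make the means multiply
```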
Edited by Prime


I really should know better than to chime in here -

Seems to me with the first strategy you would lose $.30 one sixth of the time, $.20 one sixth of the time, and $.10 one sixth of the time; and win $.10 one sixth of the time, $.20 one sixth of the time, and $.50 one sixth of the time, for a net gain of $.20 over six trials, or $.0333/trial. But for the second strategy, using a similar methodology, wouldn't that be a wash? Or perhaps, since you start with one dollar on your initial pull of the handle, you get the same odds as the first strategy (+$.0333) and even money after that. So the two strategies yield the same expected profit?

EDIT: I knew I shouldn't have tried to play with the big boys. See where I erred now and am with Prime.



The six equally likely payoffs are .7 .8 .9 1.1 1.2 1.5.

Their arithmetic mean AM is 1.033333 ...

Their geometric mean GM is 0.9996530325.

In Method 1 you pay $1.00 for each pull and remove your winnings.

In Method 2 you pay $1.00 once and keep your stake at risk. Thus,

AM (>1) gives you the expected return per dollar per pull for Method 1. (Winnings are added.)

GM (<1) gives you the expected return per dollar per pull for Method 2. (Winnings are multiplied.)
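Both means are quick to verify (a minimal Python sketch):

```python
import math

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

am = sum(payoffs) / len(payoffs)               # arithmetic mean: expected additive return
gm = math.prod(payoffs) ** (1 / len(payoffs))  # geometric mean: expected multiplicative factor
print(am)  # 1.0333...
print(gm)  # 0.9996530...
```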

Post 3 paid $1.00, pulled the handle twice and then removed the winnings.
This was done 36 times with the results averaged. This is not Method 2.

It's actually Method 1 for a new set of payoffs: the 36 pairwise products of the original six payoffs.

This is a winning game, with an average return on a dollar bet of AM^2.

AM^2 is in fact the arithmetic mean of the new set of payoffs.

So this is one more example of a winning Method 1 result.

To test Method 2 for 72 pulls of the handle, you bet $1.00 once and keep the result at risk for all 72 pulls.

That final result is just the product of the 36 pairwise products, which is 0.9753235719 = GM^72.

And that is the expected result for Method 2: each pull multiplies your stake by GM.

Again, why does Method 1 win and Method 2 lose?

Because AM applies to added winnings and AM>1; GM applies to multiplied winnings and GM<1.

Simulation: Pull the handle 100,000 times with random results:

  1. Method 1:
    Average return on each $1.00 bet is 1.032955 <--> AM = 1.0333 ...

  2. Method 2:
    Stake goes from $1.00 to $1.414863x10^-31 (compare GM^100000).
    The 100,000th root of 1.414863x10^-31 is 0.9992899. <--> GM = 0.9996530325.
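A sketch of such a simulation (a hypothetical reconstruction; the original code isn't shown, and a fixed seed means the numbers differ slightly from those quoted):

```python
import math
import random

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
N = 100_000
rng = random.Random(42)

# Method 1: bet a fresh $1 on every pull and pocket the result
total_returned = sum(rng.choice(payoffs) for _ in range(N))
print(total_returned / N)  # close to AM = 1.0333...

# Method 2: keep the entire stake at risk; sum logs to avoid underflow
log_stake = sum(math.log(rng.choice(payoffs)) for _ in range(N))
print(math.exp(log_stake / N))  # per-pull factor, close to GM = 0.99965...
```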



hmmm

okay bonanova.

consider the following.

bet 1 dollar the first round.

bet 1.032955 the second round.

what would be my expected winnings after round 2?

How you describe going from the initial $1 to the (expected) 1.033333 after the first pull matters.

It should be thought of as adding $0.03333 ... to your stake.

It should not be thought of as multiplying your stake by 1.0333 ...

In Method 1 (single pull) you add the arithmetic mean of the payoffs (minus the $1 bet) each time.

With each pull, you subtract $0.30, $0.20, $0.10, or add $0.10, $0.20 or $0.50 to your stake.

That is all that ever happens in Method 1.

So the average of these results is your expected winnings on each pull.

Method 2 does not add to or subtract from your stake: it multiplies your stake.

You don't get to protect your winnings from the effects of a later bad payoff.

Let's see how that matters. Suppose you get 1.1, 1.1, 1.1, and then .7.

In Method 1, you'd win $0.30 then lose $0.30, breaking even.

In Method 2, your stake is multiplied by 1.1^3 ($1.331) then by .7 ($0.9317), losing $0.0683.

In Method 1, the .7 payoff loses you only $0.30.

In Method 2, the .7 payoff loses you 30% of your entire stake.
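Tracing that four-pull example in code (a minimal sketch):

```python
seq = [1.1, 1.1, 1.1, 0.7]

# Method 1: each pull adds (payoff - 1) to your total winnings
winnings = sum(p - 1 for p in seq)
print(round(winnings, 10))  # 0.0 -- three $0.10 wins cancel the $0.30 loss

# Method 2: each pull multiplies the stake
stake = 1.0
for p in seq:
    stake *= p
print(round(stake, 4))  # 0.9317 -- a net loss of $0.0683
```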

You expect to multiply your stake by the product of the six payoffs every six pulls.

That is another way of saying that you expect to multiply your stake by GM on each pull.

Since GM < 1, you expect to lose.

Let's make it really dramatic and add payoffs of 0 and 10 to the mix.

The arithmetic mean of the payoffs is now 2.025.

Does that give Method 2 the expectation of doubling your stake on each pull?

No. Because you eventually get a 0. What happens then?

Method 1: you lose $1. No big deal.

Next pull might be 10, and you more than make up for the loss.

Method 2: you lose your entire stake. That is a big deal.

Getting a 10 on the next 100 pulls won't make up for that. Once you're broke, you stay broke.

This makes it clear that the arithmetic mean does not apply to Method 2.

Adding a 0 to the payoffs makes GM = 0, and that is now your expectation for Method 2.

If the payoffs in the OP were changed slightly by replacing 1.5 by 1.6 then GM ~ 1.0105.

With those payoffs, both methods win.
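That claim is easy to verify numerically (a quick sketch):

```python
import math

# Same machine, but with 1.5 replaced by 1.6
payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.6]
gm = math.prod(payoffs) ** (1 / 6)
print(gm)  # ~1.0105, so Method 2 now wins as well
```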



This example illustrates the difference between expected value and expected outcome. The expected value is +X/30, where X is the amount you put into the machine. The expected outcome of method 2 is something quite different though. Over the course of six games, you can expect one of each result, netting you 0.7*0.8*0.9*1.1*1.2*1.5*X - X = -0.00208X. Hence, the expected outcome is negative, but the expected value is positive. This might seem counterintuitive but it is not surprising, considering that you can never lose even one full dollar, while your potential winnings are unlimited.

Suppose you play using method 2 for a very long time, starting with one dollar. What can you expect? Say you play N times, and let E>0 (small number). The probability that your bankroll is less than E approaches 1 as N approaches infinity. In other words, in the long run you will almost certainly lose almost the whole dollar.

Suppose you try to play using method 2 until you have won 1 million dollars, and then you stop. Once again you're out of luck (unless you're in a whole lot of luck). The probability that your bankroll will ever exceed M approaches 0 as M approaches infinity. For M = $1,000,000, it is very unlikely that you will ever win that much.

On the other hand, suppose we play using method 1 for a very long time, one dollar every game. What can you expect? Say you play N times, and let W>0 (big number). The probability that your winnings exceed W approaches 1 as N approaches infinity. In other words, in the long run you will almost certainly win as much as you want.

So why will you get rich by method 1 but not by method 2? Because in method 1 your bankroll is infinite, while in method 2 your bankroll is a measly $1. In method 1, no matter how much you might lose in the short run, you will always be able to bet another dollar for the next game. This means you were already rich from the start, you have an infinite bankroll, and winning a finite amount of money is easy. In method 2, no matter how much you might lose in the short run, you will never bring in any outside money for the next game. This means you have a $1 bankroll, and winning a large amount of money is hard.

The more you win using method 2, the more you stand to lose (as your bet goes up). But the more you lose, the less you stand to win (as your bet goes down).

What of Prime's argument about your money growing exponentially with the factor of 1.0333... each game? If we were to actually increase our bet by that factor with each consecutive game, regardless of how the game went, then his argument would hold true. But then again we would have an infinite bankroll, as we would be able to compensate for any losses using outside money. In method 2, whenever we lose our next bet will be smaller, and it will be harder for us to make up for that loss. But whenever we win our next bet will be larger, and it will be easier for us to lose those winnings.

So after 422 games, the expected value is over +1,000,000 dollars. But we cannot expect to have nearly that much. We can expect to have (0.7*0.8*0.9*1.1*1.2*1.5)^(422/6) dollars, which is about 86 cents, because that is the expected outcome. This is an important concept for any gambler to be aware of. You can never expect to gain your full expected value. The expected outcome is always smaller than the expected value, and the reason for this is your limited bankroll. If you are only looking to maximize expected value, you will bet your entire bankroll on every favorable game you come across (higher bet equals higher EV), and inevitably lose all your money at some point.
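The two numbers can be computed side by side (a sketch):

```python
import math

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
games = 422

am = sum(payoffs) / 6
expected_value = am ** games                          # mean over all possible histories
expected_outcome = math.prod(payoffs) ** (games / 6)  # the "one of each result per six games" path
print(expected_value)    # over a million dollars
print(expected_outcome)  # about $0.86
```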

Finally, the big question, can you win money using this machine? We must disqualify method 1 because it assumes an infinite bankroll. We can't use method 2 because it expects to lose money. Let's say your bankroll is $1 to start with, and you want it to grow to $1,000,000. Can you do it? Can you make a poor man rich? The answer is yes. Rather than throwing in your whole bankroll each time, suppose you throw in a fraction of it. Let that fraction be k. We have 0<k<1. You have 1-k of your bankroll left over on the side. The possible returns are 0.7k, 0.8k, 0.9k, 1.1k, 1.2k, and 1.5k. Your new bankroll will be either 1-0.3k, 1-0.2k, 1-0.1k, 1+0.1k, 1+0.2k, or 1+0.5k. The expected outcome over six games is (1-0.3k)(1-0.2k)(1-0.1k)(1+0.1k)(1+0.2k)(1+0.5k), which has a local maximum of ~1.0492607 at k~0.491298 (courtesy of Wolfram|Alpha). A good approximation for the optimal strategy is to bet half your bankroll each game, for an expected bankroll growth factor of a little over 1.049 per six games. After no more than 1728 games you can truly expect to be a millionaire.
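The optimum can also be recovered with a plain grid search instead of Wolfram|Alpha (a sketch; the step size is an arbitrary choice):

```python
import math

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

def growth_per_six(k):
    # Bankroll factor over six games (one of each payoff),
    # betting a fraction k of the bankroll each game
    return math.prod(1 + (p - 1) * k for p in payoffs)

# Grid search over fractions 0.0001 .. 0.9999
best_k = max((i / 10_000 for i in range(1, 10_000)), key=growth_per_six)
print(best_k)                  # ~0.4913
print(growth_per_six(best_k))  # ~1.04926
```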



Consider a gambling machine A. When you put in $X and pull the handle, it will spit out (equally likely) either $0.7*X, $0.8*X, $0.9*X, $1.1*X, $1.2*X, or $1.5*X.

Now consider the following two ways of playing this machine:

Put in $1, pull the handle, and keep whatever you get. Repeat.

Initially, put in $1. Pull the handle, then put in whatever you get. Repeat.

Can you win money with this machine? Which is the better way to play? How can this be?

The first two answers have been given (post 2) and verified (post 5) by a 100,000-pull simulation.

The discussion question elicited several thoughts (posts 5, 7 and 8) that made the answers intuitive.

Finally, the answers can be changed (post 7) by changing the payoffs.



The six equally likely payoffs are .7 .8 .9 1.1 1.2 1.5.

Their arithmetic mean AM is 1.033333 ...

Their geometric mean GM is 0.9996530325.

In Method 1 you pay $1.00 for each pull and remove your winnings.

In Method 2 you pay $1.00 once and keep your stake at risk. Thus,

AM (>1) gives you the expected return per dollar per pull for Method 1. (Winnings are added.)

GM (<1) gives you the expected return per dollar per pull for Method 2. (Winnings are multiplied.)

Post 3 paid $1.00, pulled the handle twice and then removed the winnings.

This was done 36 times with the results averaged. This is not Method 2.

That's not what I said in Post 3.

It's actually Method 1 for a new set of payoffs: the 36 pairwise products of the original six payoffs.

No, it is not Method 1. On the second turn the entire bankroll is staked -- not the original betting amount.

This is a winning game, with an average return on a dollar bet of AM^2.

AM^2 is in fact the arithmetic mean of the new set of payoffs.

So this is one more example of a winning Method 1 result.

To test Method 2 for 72 pulls of the handle, you bet $1.00 once and keep the result at risk for all 72 pulls.

That final result is just the product of the 36 pairwise products, which is 0.9753235719 = GM^72.

And that is the expected result for Method 2: each pull multiplies your stake by GM.

Again, why does Method 1 win and Method 2 lose?

Because AM applies to added winnings and AM>1; GM applies to multiplied winnings and GM<1.

Simulation: Pull the handle 100,000 times with random results:

  • Method 1:

    Average return on each $1.00 bet is 1.032955 <--> AM = 1.0333 ...

  • Method 2:

    Stake goes from $1.00 to $1.414863x10^-31 (compare GM^100000).

    The 100,000th root of 1.414863x10^-31 is 0.9992899. <--> GM = 0.9996530325.

There is an interesting, thought-provoking point there, presenting a kind of pseudo-paradox in this problem.

I stand firmly by my solution in post 3, though I am beginning to have some doubts about how straightforward it is. The illustration I gave in support of the solution was not a proof. And since bonanova has misinterpreted my illustration, it must have been unintelligible. (It happens to me from time to time: in my head a statement I make seems precise and clear, but other people can't make sense of it.)

So in the interest of befuddlement, perplexity, and creative thought...

A fair white die can be used for playing this game. All one needs to do is ignore the numbers engraved on it and write the numbers 0.7, 0.8, 0.9, 1.1, 1.2, and 1.5 on its six sides with a permanent marker.

Since we have all agreed that Method 1, i.e. betting a fixed amount, yields an average factor of F = (0.7+0.8+0.9+1.1+1.2+1.5)/6 = 1.0333... on each bet, we can concentrate on Method 2, where we stake our entire proceeds from previous bets on each subsequent turn.

A product of all die rolls certainly suggests itself.

Starting with an initial amount of A and having rolled r_1, r_2, …, r_N, we end up with the final payoff P = A*r_1*r_2*…*r_N. For example, starting with $1 and rolling 1.1, 1.1, and 0.7, we end with $1*1.1*1.1*0.7 = $0.847. (Let's always use $1 as an initial bet to dispense with the variable A.)

PERSPECTIVE 1

Suppose I had plenty of free time and went on a long gambling streak, rolling the die N times, where N is a very large number. Since all 6 die sides are equally probable, there would be approximately N/6 of each of the 6 possible values. Then, employing the commutative property of multiplication, we could regroup the factors when calculating the final payoff like so:

P = (0.7*0.8*0.9*1.1*1.2*1.5)*(0.7*0.8*0.9*1.1*1.2*1.5)*... = (0.7*0.8*0.9*1.1*1.2*1.5)^(N/6).

With a large N the deviation from this formula would seem insignificant. Noting that 0.7*0.8*0.9*1.1*1.2*1.5 = 0.99792, the payoff P would tend to zero for large N.

PERSPECTIVE 2

N consecutive rolls of a die generate T = 6^N total possible variations. The Average Payoff P_N for N consecutive rolls of the die is defined as:

P_N = S_1*W_1 + S_2*W_2 + ... + S_T*W_T, where each S_i is the payoff for a single variation of N die rolls, and W_i is the probability of that variation. In this case each W_i is equal to 1/T = 1/6^N.

Note: S_i is not the payoff for some random, experimentally obtained string of N die rolls, but rather the payoff for an individual theoretical possibility of such a string. The possible payoffs range from 0.7^N to 1.5^N. Whereas each of those boundary values corresponds to only one possible string of N die rolls, most of the payoff values in between may be obtained by many different strings.

Let's define the average factor for our game as F = ( 0.7+ 0.8 + 0.9 + 1.1 + 1.2 + 1.5)/6 = 1.0333... .

Theorem: For N consecutive rolls, the Average Payoff P_N = F^N.

Induction Proof

1). For N = 1 the rule holds, namely P_1 = F^1 = 1.0333...

2). Suppose the rule holds for some N >= 1, namely P_N = F^N. Let's prove that it is then also true for N+1.

The total number of variations for N+1 rolls is 6^(N+1). The collection of all variations may be obtained by multiplying each of the variations of the N-roll set by the 6 possible values of a die roll. Thus the Average Payoff for N+1 rolls becomes:

P_(N+1) = S_1*0.7/6^(N+1) + S_1*0.8/6^(N+1) + S_1*0.9/6^(N+1) + S_1*1.1/6^(N+1) + S_1*1.2/6^(N+1) + S_1*1.5/6^(N+1) + S_2*0.7/6^(N+1) + ... = S_1*(0.7+0.8+0.9+1.1+1.2+1.5)/6^(N+1) + S_2*(0.7+0.8+0.9+1.1+1.2+1.5)/6^(N+1) + ... = (S_1/6^N)*F + (S_2/6^N)*F + … + (S_T/6^N)*F = F*(S_1/6^N + S_2/6^N + … + S_T/6^N) = F*P_N = F*F^N = F^(N+1)

QED.

SIMULATION

When running a simulation, keep in mind that number representations in a computer have limited precision, 18 digits or so, depending on the data type. A 72-roll string yields a payoff value with up to 72 digits past the decimal point, and possibly many digits before it (e.g. 1.5^72). Truncating the interim values would introduce a large error, rendering such a simulation grossly inaccurate. I suggest using a shorter string of die rolls, 12 or so.

For 12 consecutive rolls, the theoretical average payoff is (1.0333...)^12 = 1.4821264897.

I wrote a simple computer simulation and ran 1,000,000 sets of 12 consecutive die rolls. The average of the 1,000,000 payoffs came to 1.484941558, which is slightly higher than the theoretical value, but nowhere near the 0.995844326 predicted by PERSPECTIVE 1.
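A smaller-scale sketch of that simulation (100,000 sets instead of 1,000,000, with a fixed seed; the original code isn't shown):

```python
import random

payoffs = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
rng = random.Random(0)

def twelve_roll_payoff():
    # Stake $1 and keep the whole stake at risk for 12 rolls
    result = 1.0
    for _ in range(12):
        result *= rng.choice(payoffs)
    return result

trials = 100_000
avg = sum(twelve_roll_payoff() for _ in range(trials)) / trials
print(avg)  # close to F^12 = 1.4821..., nowhere near Perspective 1's 0.9958...
```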

PERSPECTIVE 1 seems so plausible, and yet it is a false method for calculating the Average Payoff. Why? I think, simply, that the expected string of N die rolls is not representative of the Average Payoff. In the example of 12 consecutive rolls, the expected string would be one with 2 of each of our six possible values. The number of ways to construct such strings is 12!/(2!)^6 = 7,484,400. That is a comparatively small number, constituting approximately 0.34% of the 6^12 total variations.

For anyone who is still not convinced, I am ready to play the game with a fair die, staking my entire bankroll on each roll of the die. (To speed up the process, we could roll 12 dice at a time.)

Edited by Prime


We must disqualify method 1 because it assumes an infinite bankroll.

Do you need an infinite stake to play Method 1, as you would in a Martingale scenario?

Can't you feed in dollar bills until your stake exceeds a certain amount, then play with house money?

After your stake reaches $2 you're basically starting over, with a free dollar to use.



No, it is not Method 1. On the second turn the entire bankroll is staked -- not the original betting amount.

It seemed clear what I implied about the pairwise case, but maybe it wasn't.

On the second pull the entire stake is at risk. Correct.

You subject your $1 to two pulls rather than to a single pull.

What I did not say explicitly was that's equivalent to a single pull with a payoff equal to the product of the two payoffs.

Then after the second pull you withdraw your winnings. They are never again at risk.

What I said was that is equivalent to method 1, just using a different set of payoffs.

You had 72 pulls, but you didn't bet just $1. You bet $36.

To represent method 2, you can't put into play a fresh $1, and take out your winnings, every other pull.

You must leave your stake at risk for all 72 pulls.

If you do, eventually the .7s will bring you down.


So after 422 games, the expected value is over +1,000,000 dollars. But we cannot expect to have nearly that much. We can expect to have (0.7*0.8*0.9*1.1*1.2*1.5)^(422/6) dollars, which is about 86 cents, because that is the expected outcome. This is an important concept for any gambler to be aware of. You can never expect to gain your full expected value. The expected outcome is always smaller than the expected value, and the reason for this is your limited bankroll. If you are only looking to maximize expected value, you will bet your entire bankroll on every favorable game you come across (higher bet equals higher EV), and inevitably lose all your money at some point.

Finally, the big question, can you win money using this machine? We must disqualify method 1 because it assumes an infinite bankroll. We can't use method 2 because it expects to lose money. Let's say your bankroll is $1 to start with, and you want it to grow to $1,000,000. Can you do it? Can you make a poor man rich? The answer is yes. Rather than throwing in your whole bankroll each time, suppose you throw in a fraction of it. Let that fraction be k. We have 0<k<1. You have 1-k of your bankroll left over on the side. The possible returns are 0.7k, 0.8k, 0.9k, 1.1k, 1.2k, and 1.5k. Your new bankroll will be either 1-0.3k, 1-0.2k, 1-0.1k, 1+0.1k, 1+0.2k, or 1+0.5k. The expected outcome over six games is (1-0.3k)(1-0.2k)(1-0.1k)(1+0.1k)(1+0.2k)(1+0.5k), which has a local maximum of ~1.0492607 at k~0.491298 (courtesy of Wolfram|Alpha). A good approximation for the optimal strategy is to bet half your bankroll each game, for an expected bankroll growth factor of a little over 1.049 per six games. After no more than 1728 games you can truly expect to be a millionaire.
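Rainman's optimal fraction can also be checked numerically without Wolfram|Alpha. A sketch (the grid resolution is an arbitrary choice of mine):

```python
# Find the fraction k of the bankroll to stake that maximizes the six-game
# growth factor (1-0.3k)(1-0.2k)(1-0.1k)(1+0.1k)(1+0.2k)(1+0.5k).
PAYOFFS = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

def six_game_growth(k):
    growth = 1.0
    for m in PAYOFFS:
        growth *= 1 + (m - 1) * k   # bankroll factor when the machine pays m per dollar staked
    return growth

best_k = max((i / 10000 for i in range(1, 10000)), key=six_game_growth)
print(best_k, six_game_growth(best_k))
```

This lands at k ≈ 0.4913 with growth ≈ 1.04926 per six games, in line with the figures above; betting exactly half is a close and convenient approximation.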

Yes, there it is -- Expected Value vs. Expected Outcome.

I fancy myself as an avid gambler. And in practice I would choose the mix of the two methods, as Rainman has suggested here. (Playing an infinite number of times is rather impractical.)

I have run an experiment playing 422 consecutive rolls 10 times with the following results:

4 times I had less than $0.01 left of my initial $1 bankroll.

3 times I had between $0.02 and $0.35 left.

3 times I won with the largest win of $83.62.

That left me very much ahead (more than $80, while putting at risk just $10).

The new question here is, what is the optimal string of consecutive rolls?

3 times I

Link to comment
Share on other sites

  • 0

....

On the second pull the entire stake is at risk. Correct.

You subject your $1 to two pulls rather than to a single pull.

.....

I was not implying any actual "pulls" in my illustration. Nor did I say anything about withdrawing or adding any amounts to the stake.

I did not mean it as an actual trial in the game. I meant it as the enumeration of all possible variations with their respective probabilities.

Link to comment
Share on other sites

  • 0

This example illustrates the difference between expected value and expected outcome.

....

3 times I

It seems your post was cut off unfinished.

Expected outcome is the true value for any gambler who seeks to eliminate the "gambling" part. The probability is 1 that your own outcome approaches the expected outcome as the number of games approaches infinity. So in the long run, you are practically guaranteed to win if your expected outcome is positive. The same is not true for expected value.

Expected value is too influenced by the extremely high payoffs in the extremely unlikely variations. In this case, if you play for example 100 times, the extremely lucky variation where you hit 1.5 every time would yield a net payoff of roughly +$406,561,177,535,215,236. On the other hand, the extremely unlucky variation where you hit 0.7 every time would yield a net payoff of roughly -$1. The average net payoff, or expected value, would be (31/30)^100 - 1, or roughly $27. So the deviation (actual payoff - average payoff) is 406,561,177,535,215,209 for the luckiest variation and only -28 for the unluckiest variation. It follows that the expected value is extremely tilted by the huge deviations of the impossibly lucky scenarios. You would need to be insanely lucky just to get anywhere close to the EV.

Your own experiments illustrate this perfectly. The average net payoff for running 422 consecutive games 10 times is 10*((31/30)^422 - 1), or roughly $10,220,338. Your actual net payoff was just more than $80, falling way short of the EV. Had you kept your entire bankroll running for all 4220 games, you would have lost almost everything. This was not a case of bad luck, but rather quite expected.

Had you instead bet half your bankroll every time, with a starting bankroll of the same $10, for 4220 games, your expected outcome would have been ~10*1.04926^(4220/6), or +$4,800,000,000,000,000. Feel free to simulate it, you will end up somewhere around that number. Your EV would of course be even higher, 10*((61/60)^4220 - 1) or roughly +$2,000,000,000,000,000,000,000,000,000,000.
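Rainman's "feel free to simulate it" invitation is easy to take up. A rough sketch comparing the two staking policies over 422 pulls (trial count and seed are my own arbitrary choices; medians are used because the means are dominated by rare lucky runs):

```python
import random, statistics

PAYOFFS = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

def final_bankroll(k, pulls, rng):
    """Start with $1, stake fraction k of the current bankroll on each pull."""
    bankroll = 1.0
    for _ in range(pulls):
        m = rng.choice(PAYOFFS)
        bankroll *= 1 + (m - 1) * k
    return bankroll

rng = random.Random(7)
trials = 2000
full = [final_bankroll(1.0, 422, rng) for _ in range(trials)]   # method 2: whole stake
half = [final_bankroll(0.5, 422, rng) for _ in range(trials)]   # bet half each pull

med_full = statistics.median(full)
med_half = statistics.median(half)
print(med_full, med_half)
```

The typical full-stake outcome is under $1 (around the 86 cents computed above), while the typical half-stake outcome is in the tens of dollars.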

Link to comment
Share on other sites

  • 0

....

I was not implying any actual "pulls" in my illustration. Nor did I say anything about withdrawing or adding any amounts to the stake.

....

I misunderstood.

I inferred it to be an assertion that Method 2 was a winning strategy.

Link to comment
Share on other sites

  • 0

This example illustrates the difference between expected value and expected outcome.

....

We must disqualify method 1 because it assumes an infinite bankroll.

....

Do you need an infinite stake to play Method 1, as you would in a Martingale scenario?

Can't you feed in dollar bills until your stake exceeds a certain amount, then play with house money?

After your stake reaches $2 you're basically starting over, with a free dollar to use.

What I meant is, if your bankroll is finite, there is a risk that you will run out of money and be unable to continue with method 1. You could hit a freak streak of consecutive losses. The probability is non-zero so it has to be accounted for, when calculating expected outcome in the long run. With method 1, the expected outcome for N games equals N*31/30 only because your bankroll is assumed to be infinite. With a finite bankroll B, the expected outcome for N games would equal N*31/30 for small values of N, but once B-0.3N<1 the expected outcome would drop below N*31/30. Trying to calculate it exactly with respect to both B and N would be way too much for my brain, and I don't think the formula would be pretty.

Your point is valid though, we might still have a positive expected outcome for method 1, with a large enough bankroll. It could be tested with computer simulations, but I don't have the programming skills to do those. Besides, the expected outcome is at most N*31/30, which is dwarfed by the ~1.04926^(N/6) you would get by betting half your bankroll each time.
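The finite-bankroll question raised here is straightforward to simulate. A sketch (bankroll sizes, game count, and seed are my own arbitrary choices): play method 1 at $1 per game, and count how often the player goes broke before the games run out.

```python
import random

PAYOFFS = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

def ruin_probability(bankroll, games, trials, rng):
    """Fraction of trials in which a $1-per-game player cannot place the next bet."""
    ruined = 0
    for _ in range(trials):
        money = float(bankroll)
        for _ in range(games):
            if money < 1:          # cannot afford the next $1 bet
                ruined += 1
                break
            money += rng.choice(PAYOFFS) - 1   # net result of one $1 game
    return ruined / trials

rng = random.Random(3)
r_small = ruin_probability(2, 1000, 2000, rng)    # tiny cushion: ruin is common
r_large = ruin_probability(50, 1000, 2000, rng)   # large cushion: ruin is rare
print(r_small, r_large)
```

With the +1/30 drift per game, even a modest cushion makes ruin unlikely, which supports the point that method 1 can work with a large enough finite bankroll.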

Link to comment
Share on other sites

  • 0

This example illustrates the difference between expected value and expected outcome.

....

Expected outcome is the true value for any gambler who seeks to eliminate the "gambling" part.

....

I am not disagreeing with the Expected Outcome concept. (And I have described the same in the post #12 under “PERSPECTIVE 1”, before I read your post #8. Actually, I should have read your post first. Now I invite everyone to read the spoiler in the post #12, where I derive the formula for the EV with a proof.)

However, in view of the question in the OP “Can you win in this game?” the Average/Expected Value seems very relevant for practical gambling.

I also like your system where you bet half of your entire bankroll on each turn. Although, I can't say I follow the 1.04926^(N/6) formula for this method. I'd have to think about it.

Finding “optimal” system in terms of available time, initial bankroll, Expected Outcome, and Expected Value seems like more serious mathematical research.

Another interesting and, perhaps, simpler thing to solve would be: What are the chances of ending up ahead after N rolls when staking your entire bankroll?

(I could tell it's 17/36, or better than 47%, for 2 consecutive rolls and 105/216, or better than 48%, for 3 consecutive rolls.)

I played a few more games against my computer, and it owes me a lot of money.

After winning over $80 in 10 games of 422 consecutive rolls, where the Average Payoff, or EV, is over $1,000,000, I played 10 sets of 1,000 consecutive rolls each.

Four of those sets I lost shamefully -- $1 each.

The six wins, (discarding the cents, which I left as tips for the dealer,) were $7, $34, $112, $259, $544, and $546.

After that I put the accumulation-of-rounding-error issue out of my mind and set out on playing 10,000-roll sets. Again, I played just 10. In six sets my entire bankroll of $1 dropped well below a penny. The four wins were: $20; $114; $4,490,314; and a whopping $916,248,000,000.

Am I just lucky, or am I that good, or is there a problem with my computer simulation? The 6 trailing zeroes in the last number are indicative of the rounding error.

Notably, it takes the computer less than a second to go through 10,000 rolls of a die and spit out my winnings. If I did the same by hand in a casino, each set of 10,000 rolls would take 3 hours or so. And playing 10 sets could take a week or more. But it would be worth it in this case.

At any rate, who would like to play the part of the Casino in this game? Volunteers, please!
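The rounding issue Prime suspects can be sidestepped by tracking the logarithm of the bankroll instead of multiplying thousands of floats. A minimal sketch (seed and run length are arbitrary choices of mine):

```python
import math, random

PAYOFFS = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]
LOG_PAYOFFS = [math.log(m) for m in PAYOFFS]

rng = random.Random(42)
rolls = [rng.randrange(6) for _ in range(10_000)]

# Naive running product: rounding error accumulates multiplicatively, and a
# sufficiently long losing run can underflow all the way to zero.
naive = 1.0
for r in rolls:
    naive *= PAYOFFS[r]

# Log-space accumulation: each roll adds one term, and nothing underflows.
log_bankroll = sum(LOG_PAYOFFS[r] for r in rolls)

print(naive, math.exp(log_bankroll))
```

For 10,000 rolls the two agree to many digits; for much longer runs, only the log version stays meaningful.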

Link to comment
Share on other sites

  • 0

Would it help to rephrase the problem by saying "You've heard of double-or-nothing games. You have a chance to play a triple-or-nothing game with 50/50 odds, but you have to bet your entire holdings each time you play. Should you take it?" It's in the same spirit as the OP but the math becomes trivially simple and doesn't call for simulations.

If you start with $1 and play for N rounds, you have a 1/2^N chance of winning $3^N, and a (2^N - 1)/2^N chance of losing your wager. The average outcome is 3^N/2^N = (3/2)^N, so it's a clearly winning game from that perspective; the most likely outcome is that you lose everything, so it's a losing game from that perspective.

I'd say play the game a few times. It's not really different from, say, rolling a six-sided die and getting $10 if you roll a 1 vs losing $1 if you roll anything else.
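The triple-or-nothing arithmetic above can be written out directly. A sketch (N = 10 is an arbitrary illustration of mine):

```python
# Triple-or-nothing, betting the whole stake each round, starting from $1.
N = 10
p_win_all = 1 / 2**N            # the only way to have money left after N rounds
jackpot = 3**N                  # payoff in that single lucky variation
expected_value = (3/2)**N       # average outcome, as stated above

print(p_win_all, jackpot, expected_value)
```

The EV grows without bound even though the chance of ending with anything at all shrinks toward zero, which is the whole point of the rephrasing.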

Link to comment
Share on other sites

  • 0

Upon further reflection, I am leaning back to my first post (#3) and the solution therein. The Expected Outcome and Geometric Mean have no bearing on the winning scheme you must adopt in this game. In particular, the Geometric Mean formula does not offer any tangible means of predicting the outcome of a gambling binge, whereas the Expected Value formula works just fine in practice.

To sum up:
The betting pattern of staking your entire remaining bankroll on each turn yields an average (Expected Value) payoff of P = $A*((0.7 + 0.8 + 0.9 + 1.1 + 1.2 + 1.5)/6)^N = $A*(1.0333...)^N,
where $A is the starting amount, and N is the number of consecutive rolls of the die. For proof, see post #12 inside the spoiler.
That is all that counts, and you need not read the rest of what I wrote here.

We must keep in mind that N consecutive rolls of a die offer a total of 6^N variations. E.g., 72 consecutive rolls yield 6^72 possible variations. That is a very large number, far beyond our computers' abilities. Therefore averaging a sample as infinitesimally small as 100,000 sets of 72 dice rolls does not offer a statistically significant sample.
Even so, the theoretical average winning factor for 72 dice rolls is approximately 10.6. After running several sets of 100,000 72-roll games, I got some odd average payoffs ranging from 8.2 to 13.7, but most experiments produced values rather close to 10.6.

The argument in favor of the geometric mean formula is that over a very large number of rolls, our six possible values will tend to be represented evenly. And the product 0.7*0.8*0.9*1.1*1.2*1.5 = 0.99792, being less than 1, would drag the resulting payoff down to zero. Therefore, it is reasoned that if you roll the die many times, you surely will lose your bankroll.
Actually, the median strings of N die rolls, where the 6 values are represented in equal numbers (N/6 each), are a minute fraction of all possibilities. As N becomes larger, that fraction becomes smaller.
Say we used this idea of the median string to study the outcome of a gambling spree of 12 die rolls. The geometric mean formula predicts 0.99792^2 = 0.995844. What practical use has that prediction? You should still expect an average win of 1.48 times your bet, just as the EV formula predicts. And overall you will be winning (ending up with more than $1) approximately 49.4% of the time.

Here is an interesting question for further research.
In this game what are your chances of finishing up with more than you started after N rolls?
For N = 1, we know the probability of ending up ahead is 1/2, or 50%; for N = 2, it is 17/36. Observe the set of values my computer has provided:
Number of Rolls...Chance ending up ahead
1............................0.5
2............................0.4722
3............................0.4861
4............................0.5069
5............................0.4949
6............................0.4892
7............................0.4929
8............................0.4947
9............................0.4961
10..........................0.4966
11..........................0.4945
12..........................0.4941

For 12 consecutive rolls, the computer took a few minutes to consider before spitting out the answer, so I decided to stop asking. We can see from the table above that the percentage of winning variations does not go straight down, but fluctuates back and forth. Empirical data from my game simulations suggests that the probability of ending ahead after 1,000 consecutive rolls is somewhere between 0.46 and 0.49.
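The table values can be reproduced exactly without enumerating all 6^N strings: group roll sequences by their outcome counts and weight each group by its multinomial coefficient. A sketch of my own (not the original program):

```python
from itertools import combinations_with_replacement
from math import factorial, prod

FACTORS = (0.7, 0.8, 0.9, 1.1, 1.2, 1.5)

def p_ahead(n):
    """Exact probability that the product of n i.i.d. factor draws exceeds 1."""
    winning = 0
    for combo in combinations_with_replacement(range(6), n):
        counts = [combo.count(i) for i in range(6)]
        ways = factorial(n)              # multinomial coefficient:
        for c in counts:                 # n! / (c1! * c2! * ... * c6!)
            ways //= factorial(c)
        if prod(f ** c for f, c in zip(FACTORS, counts)) > 1.0:
            winning += ways
    return winning / 6 ** n

for n in (1, 2, 3, 12):
    print(n, round(p_ahead(n), 4))  # compare with the table above
```

For N = 12 this visits only C(17,5) = 6,188 multisets instead of 6^12 ≈ 2.2 billion strings, so it runs in well under a second.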
It would be interesting to find a formula. If not a formula, then an answer to this question:
Given that the median-string payoff tends to zero as N goes to infinity, does the ratio of winning variations to overall variations go to zero too? Or does it have some other limit? (A proof, if you please.)

For all practical purposes, you cannot lose in this game (betting your entire remaining bankroll on each turn.)
The Casino denied my application to allow rolling 100 dice at a time.
Still, if you manage to roll a single die 10 times per minute, it would take you less than two hours to complete a string of 1,000 rolls. It would take you a couple of months to repeat that experiment 100 times, by which point you would almost certainly have won millions (starting each set of 1,000 rolls with a bankroll of just $1). And for all practical purposes you don't even risk any money over that long run, since you can't lose more than $1 in each set, and about 47.5% of the time you come out ahead.
In this very practical everyday situation, what useful information does the “Expected Outcome” of $0.70 carry?

As we increase the number of consecutive rolls, we get less and less of a chance to see the expected average payoff. Yet all the losses are packed very neatly between $0 and $1, while the wins are spread wide, up to amounts dwarfing the GNP of the entire Galaxy. (As already noted by Rainman.)

Once again, those who don't believe it's a winning game, can play the Casino side.

Edited by Prime
Link to comment
Share on other sites

  • 0

I played a few more games against my computer, and it owes me a lot of money.

After winning over $80 in 10 games of 422 consecutive rolls each, where the Average Payoff (EV) is over $1,000,000, I played 10 sets of 1,000 consecutive rolls each.

Four of those sets I lost shamefully -- $1 each.

The six wins (discarding the cents, which I left as tips for the dealer) were $7, $34, $112, $259, $544, and $546.

After that I put the accumulating rounding-error issue out of my mind and set out to play 10,000-roll sets. Again, I played just 10. In six sets my entire bankroll of $1 dropped way below a penny. The four wins were: $20; $114; $4,490,314; and a whopping $916,248,000,000.

Am I just lucky, or am I that good, or is there a problem with my computer simulation? The six trailing zeroes in the last number are indicative of rounding error.

Notably, it takes the computer less than a second to go through 10,000 rolls of a die and spit out my winnings. If I did the same by hand in a casino, each set of 10,000 rolls would take 3 hours or so, and playing 10 sets could take a week or more. But it would be worth it in this case.
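One way to rule out rounding error entirely (my own suggestion, not part of the original simulation) is to play the game in exact rational arithmetic:

```python
from fractions import Fraction
import random

def exact_game(n_rolls, seed=7):
    """One let-it-ride game played in exact rational arithmetic --
    the bankroll is a Fraction, so there is no float rounding at all."""
    rng = random.Random(seed)
    factors = [Fraction(7, 10), Fraction(8, 10), Fraction(9, 10),
               Fraction(11, 10), Fraction(12, 10), Fraction(15, 10)]
    bankroll = Fraction(1)
    for _ in range(n_rolls):
        bankroll *= rng.choice(factors)
    return bankroll

# Any trailing zeroes in this result are genuine, not float artifacts.
print(float(exact_game(1_000)))
```

The denominators grow to roughly n digits, so this is slower than floats, but for a few thousand rolls it is still nearly instant, and it settles whether a result like $916,248,000,000 is real or an accumulation of rounding.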

At any rate, who would like to play the part of the Casino in this game? Volunteers, please!

I agree that the EV is (31/30)N for N rolls. But the problem is, as N grows larger, that value will become distributed over a relatively smaller set of variations. For a large enough N, everybody in the world could play the game and everyone would lose. You asked if the ratio of wins to overall variations approaches 0 as N approaches infinity. It does, and I will prove it as you asked. The proof is at the end of this post.

As for your simulations with 10,000 rolls each: for some perspective, consider what the EV is for 10,000 rolls. (31/30)^10000 ~ 3*10^142. The expected outcome is about 3*10^-2. Your largest result was about 9*10^11, followed by 4*10^6, 1*10^2, 2*10^1, and six results of something times 10^(-something small). Now imagine if you were the expected value, sitting around 142 on the logarithmic scale, looking down at those ten simulations, knowing that the expected outcome is sitting around -2. You might think "hey, those ten results are all packed pretty neatly around the expected outcome, while I'm all alone up here". It should become clear that it's unreasonable to expect to get the expected value. If you ran 7 billion more simulations, one for each person in the world, I can't imagine that a single one of us could be expected to win nearly that much. 7 billion simulations is still a very small sample from the 6^10000 possible variations. Also, if you had combined those ten simulations into one 100,000-roll game, you would have lost that game (based on your remark that the losses gave way below a penny in return).

The statement that you practically can't lose with method 2 is just wrong. In the long run, you practically can't win (proof still coming up). The only reason you can still win after 10,000 rolls is that the expected outcome per roll is so close to 1. The expected outcome for six rolls is 0.99792, which means 0.99792^(1/6) ~ 0.99965 per roll.
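Those per-roll figures are easy to verify directly (a quick sketch of my own):

```python
from math import prod

factors = [0.7, 0.8, 0.9, 1.1, 1.2, 1.5]

six_roll = prod(factors)        # typical (median-type) return over six rolls
per_roll = six_roll ** (1 / 6)  # geometric mean per roll
print(round(six_roll, 5))       # 0.99792
print(round(per_roll, 5))       # 0.99965

# Median-type outcome of a 10,000-roll game: about 3 cents on the dollar,
# i.e. the ~3*10**-2 "expected outcome" quoted above.
print(per_roll ** 10_000)
```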

Theorem: the probability of winning (or even getting at least a fixed non-zero return) approaches 0 as the number of rolls approaches infinity.

Proof: There are six equally likely outcomes of one roll; let's denote them a1 through a6 (a1 being 0.7 and ascending to a6 being 1.5). Let fi denote the relative frequency of the outcome ai during a sequence of rolls. For example, if we roll 12 times and get 1*a1, 3*a2, 0*a3, 1*a4, 2*a5, and 5*a6, then f1 = 1/12, f2 = 3/12, f3 = 0/12, f4 = 1/12, f5 = 2/12, and f6 = 5/12. The exact number of outcomes ai is the relative frequency fi times N. The final outcome of our game will be G = 0.7^(f1*N) * 0.8^(f2*N) * 0.9^(f3*N) * 1.1^(f4*N) * 1.2^(f5*N) * 1.5^(f6*N) = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N.

The expected value of each fi is 1/6. By the law of large numbers, P(0.1666 < fi < 0.1667, for all i) approaches 1 as N approaches infinity. But if 0.1666 < fi < 0.1667, then G = (0.7^f1 * 0.8^f2 * 0.9^f3 * 1.1^f4 * 1.2^f5 * 1.5^f6)^N < (0.7^0.1666 * 0.8^0.1666 * 0.9^0.1666 * 1.1^0.1667 * 1.2^0.1667 * 1.5^0.1667)^N < 0.9998^N. So the probability approaches 1 that your outcome is smaller than a value which approaches 0 as N approaches infinity. Conversely, for any given E > 0, the probability approaches 0 that your outcome is at least E.
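The theorem can be illustrated numerically as well. The sketch below (my own, not from the thread) estimates the win rate for growing N, tracking each bankroll as a sum of logs so long games cannot underflow:

```python
import random
from math import log

LOG_FACTORS = [log(f) for f in (0.7, 0.8, 0.9, 1.1, 1.2, 1.5)]

def win_fraction(n_rolls, trials=500, seed=1):
    """Fraction of simulated let-it-ride games ending above the $1 stake.
    A game wins iff the sum of log-factors over its rolls is positive."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(trials)
               if sum(rng.choices(LOG_FACTORS, k=n_rolls)) > 0)
    return wins / trials

# Consistent with the theorem: the win rate drifts below 1/2 as N grows,
# though very slowly, because the mean log-factor (-0.00035) is so close to 0.
for n in (10, 1_000, 10_000):
    print(n, win_fraction(n))
```

The drift is slow: by the central limit theorem the sum of logs has mean -0.00035*N and standard deviation ~0.257*sqrt(N), so the deficit only overwhelms the noise for N in the millions.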

Link to comment
Share on other sites

  • 0

So you're saying you wouldn't play a lottery with a 1/1,000,000 chance of winning $5,000,000 with a $1 ticket?

I'm saying that if $1 was my entire bankroll, I definitely wouldn't play that lottery. Sure, I could get lucky and win the big jackpot, and sure the EV is positive, but protecting your bankroll is just as important as finding favorable games. You can win in the short run playing unfavorable games as well, but in the long run you are going to lose. And if you don't protect your bankroll, you're going to lose in the long run.

If you're a recreational gambler with a steady job, and you spend a small part of your weekly salary on some weekend bets, hoping to cash in big so you can have a nice long holiday or early retirement, then bankroll management is not really an issue for you. Your bankroll is refilled on a regular basis, and you're not risking anything. However, I believe the appropriate definition of recreational gambling is that you're doing it for fun. If you're doing it to win money, if you're looking to make your living as a gambler, constantly looking for bets with a small edge to grind, knowing that if you do it right your profits will increase exponentially over time, until one day you're rich enough to retire, then you simply must know how to protect your bankroll. Otherwise you'll end up an addict, putting yourself in debt because you "know you can beat the game, you've just been unlucky so far".

Link to comment
Share on other sites
