BrainDen.com - Brain Teasers

# Next card is Red

## Question

@CaptainEd shuffles a standard deck of playing cards and begins dealing the cards face up on the table. At any time @plainglazed can say Stop and bet $1 that the next card will be red. If he does not interrupt, the bet will automatically be placed on the last card.

What is the best strategy? How much better than 50% can @plainglazed do?

## Recommended Posts

4 hours ago, bonanova said:

Pencils down. Discussion time over.

For the believers in the utility of clever play, choose a convenient number of cards (e.g. pick a small number and enumerate the cases) and quantify the advantage that can be gained.

For the nay-sayers, give a convincing argument that all the factors already discussed (permission granted to peek at spoilers) exactly balance each other out.

I remain convinced by my initial argument, but to expound a little:

Spoiler

At any point in time, the probability that the next card drawn will be red is equal to the probability that the final card will be red. This is because both cards are drawn from the same well-shuffled pile. Therefore, at any given juncture, it is never more (nor less) advantageous to call "Stop!" than it is to allow all of the cards to be dealt out.
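The key equality can be checked by brute force for a small case. Below is a sketch in Python (the function name is mine): given r red and b black cards remaining, enumerate every distinct ordering and compute both probabilities exactly.

```python
from fractions import Fraction
from itertools import permutations

def next_vs_last(r, b):
    """Exact P(next card is red) and P(last card is red), given that
    r red and b black cards remain, by enumerating every distinct
    ordering of the remaining cards."""
    orderings = set(permutations("R" * r + "B" * b))
    n = len(orderings)
    p_next = Fraction(sum(o[0] == "R" for o in orderings), n)
    p_last = Fraction(sum(o[-1] == "R" for o in orderings), n)
    return p_next, p_last
```

For every small (r, b) the two probabilities agree (both equal r/(r+b)), which is exactly the equality the argument above rests on.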

##### Share on other sites


Or is it counterbalanced by the last card?

In any trajectory that has maintained a preponderance of reds dealt, it seems unlikely that the last card will be 50-50 for red. That's vague, I realize.

##### Share on other sites


Spoiler

At any point in time, regardless of how many red cards and black cards remain undealt, the probability that the next card is red will always equal the probability that the last card is red. I don't think it matters when (or whether) "Stop!" is called.

##### Share on other sites

2 hours ago, Molly Mae said:


I agree with this statement.

I imagine it is implying that the last card is always at probability 50%, and that I'll disagree with.  Since we can evaluate the probability after a new card is revealed, we can update the probability with that new information.  If, for example, after 26 cards, we've only drawn black, we can say with certainty that the next card is red.  And we can say the same about the last card.  The next card and the last card will always have the same probability, but it won't always be 50%.

Spoiler

Precisely. There is no intended implication that the probability is 50% at all times, only that the probabilities are equal at all times. The probability is 50% on average, of course.

##### Share on other sites

8 hours ago, bonanova said:

Let there be n red cards and n black cards.


What is the smallest n that permits a winning strategy?

With a strategy of always betting when the count is high, it doesn't matter.  Foiled again by practical application?  I can see that for any variation of that strategy, I'll gain cases that win but I'll lose an equal number that used to win.

N=2 is 50%
RRBB - Lose
RBBR - Win
RBRB - Lose
BBRR - Lose
BRRB - Win
BRBR - Win

N=3 is 50%
RRRBBB - Lose
RRBRBB - Lose
RRBBBR - Win
RRBBRB - Lose
RBRRBB - Lose
RBRBRB - Lose
RBRBBR - Win
BRRRBB - Win
BRRBBR - Win
BRRBRB - Win
BBRRRB - Lose
BBRRBR - Lose
BBRBRR - Lose
BRBRRB - Win
BRBRBR - Win
BRBBRR - Win
RBBRRB - Win
RBBRBR - Win
RBBBRR - Lose
BBBRRR - Lose
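The hand enumeration above can be reproduced mechanically. Here is a brute-force sketch in Python (the function name is my own) for the strategy "bet red as soon as more blacks than reds have been dealt; otherwise the bet falls on the last card":

```python
from itertools import permutations

def wins(n):
    """Over all distinct orderings of n red and n black cards, count
    wins of the strategy: bet red as soon as the dealt cards contain
    more blacks than reds; otherwise the bet falls on the last card."""
    decks = set(permutations("R" * n + "B" * n))
    won = 0
    for deck in decks:
        reds = blacks = 0
        bet_index = 2 * n - 1              # default: the final card
        for i, card in enumerate(deck[:-1]):
            reds += card == "R"
            blacks += card == "B"
            if blacks > reds:              # call "Stop!" here
                bet_index = i + 1
                break
        won += deck[bet_index] == "R"
    return won, len(decks)
```

wins(2) gives 3 of 6 and wins(3) gives 10 of 20, matching the lists above: exactly 50% either way.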

Edited by Molly Mae
##### Share on other sites


If he's extremely lucky and the first 26 cards are black, then the best is 100%. The odds are greater than 50% any time there are more black than red cards on the table. I don't know if there's an ideal average number of black cards to have over the red cards, though.

##### Share on other sites


Thanks Thalia...

Spoiler

For the case of being extremely lucky, there is the corresponding case of being extremely unlucky: the first 26 cards are red.

You can say he can be very lucky when 25 of the first 26 cards are black; that is outweighed by the case where 25 of the 26 are red.

...

Every situation in which more red cards remain in the deck has a counterpart with the colors reversed.

If you try with a small deck of 4, 6, or 8 cards, you can see the correspondence between the cards already dealt and the possible orderings of what remains (permutations of the stack).

##### Share on other sites


better than 50%?

Spoiler

Half of the time there will be a point where more black cards than red have been flipped, in which case the odds that the next flip is red are greater than 50%. If that does not occur, the long-term odds that the last card is red are still fifty-fifty? May be faulty logic; conditional probability is a bear.

For example: the odds of black, black are .5(25/51) ≈ .2451, and then the odds that the next card flipped is red are 26/50 = .52. The other 1 - .2451 of the time, the odds that the last card is red are .5? So with this scheme the combined odds of picking a red are (.2451)(.52) + (1 - .2451)(.5) = .5049.

But if this logic stands, how to optimize?
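The arithmetic in the example, exactly as stated (including the assumed 0.5 for all other trajectories, which the post itself flags as possibly faulty), can be reproduced in a few lines:

```python
# Reproduce the arithmetic above: deal black, black, then bet red.
p_bb = (26 / 52) * (25 / 51)        # first two cards are both black
p_red_next = 26 / 50                # 26 reds among the 50 remaining
# The 0.5 below is the post's assumption for all other trajectories.
p_total = p_bb * p_red_next + (1 - p_bb) * 0.5
```

This gives p_total ≈ 0.5049 as claimed; whether the trailing 0.5 assumption is justified is exactly the open question.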

##### Share on other sites


Last card red might not be a good backup plan

plainglazed will bet on red as soon as more blacks than reds have appeared. If that has not occurred, he'll depend on the last card.

For decks of size 4, 6, and 8, the trajectories that never have more blacks than reds make up 1/3, 1/4, and 1/5 of all trajectories, respectively. And they ALL end with a black card. I'm inclined to think there's a simple argument that will generalize, but I'm not seeing it yet.
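The 1/3, 1/4, 1/5 counts, and the fact that every such trajectory ends in black, can be checked by enumeration. A Python sketch (the function name is mine):

```python
from itertools import permutations

def never_blacks_ahead(n):
    """Among all orderings of n red and n black cards, find those in
    which blacks never outnumber reds in any prefix; report how many
    there are, the total, and whether all of them end with black."""
    decks = set(permutations("R" * n + "B" * n))
    good = [d for d in decks
            if all(d[:i + 1].count("B") <= d[:i + 1].count("R")
                   for i in range(2 * n))]
    return len(good), len(decks), all(d[-1] == "B" for d in good)
```

For n = 2, 3, 4 this returns (2, 6, True), (5, 20, True), and (14, 70, True): the fractions 1/3, 1/4, 1/5, and every such deck ends in black. (The counts 2, 5, 14 are the Catalan numbers.)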

##### Share on other sites

3 hours ago, CaptainEd said:

Or is it counterbalanced by the last card?

Yeah, I think I agree with that statement, dealer.

##### Share on other sites


I think:

Do not bet on the first card.

There is a 50% chance the first card is black; in that case, bet on all the following cards and win $1.

If the first card is red, keep betting on the cards that follow until more reds than blacks have appeared.

In total, a >50% chance to win $1.

##### Share on other sites

4 hours ago, aiemdao said:

I think:

Do not bet on the first card.

There is a 50% chance the first card is black; in that case, bet on all the following cards and win $1.

If the first card is red, keep betting on the cards that follow until more reds than blacks have appeared.

In total, a >50% chance to win $1.

The way it's set up, he can say Stop just once.

##### Share on other sites


Interestingly, betting on the last card always has the same probability as betting on the second-to-last card. I'll say the best betting odds likely come from waiting until the count of remaining red cards is higher than black.

This doesn't fall victim to the inverse probability problem.

Example:
1. BRR... should bet after the first card.
2. RBB... should bet after the third card.

Any time you don't bet that a red will come up, you're betting that the odds of a black coming up are equal or better, which is only the case if more (or an equal number of) reds have already appeared.

Edited by Molly Mae
##### Share on other sites


Pencils down. Discussion time over.

For the believers in the utility of clever play, choose a convenient number of cards (e.g. pick a small number and enumerate the cases) and quantify the advantage that can be gained.

For the nay-sayers, give a convincing argument that all the factors already discussed (permission granted to peek at spoilers) exactly balance each other out.

##### Share on other sites

38 minutes ago, ThunderCloud said:

I remain convinced by my initial argument, but to expound a little:


At any point in time, the probability that the next card drawn will be red is equal to the probability that the final card will be red. This is because both cards are drawn from the same well-shuffled pile. Therefore, at any given juncture, it is never more (nor less) advantageous to call "Stop!" than it is to allow all of the cards to be dealt out.

I agree with this statement.

I imagine it is implying that the last card is always at probability 50%, and that I'll disagree with.  Since we can evaluate the probability after a new card is revealed, we can update the probability with that new information.  If, for example, after 26 cards, we've only drawn black, we can say with certainty that the next card is red.  And we can say the same about the last card.  The next card and the last card will always have the same probability, but it won't always be 50%.

##### Share on other sites


Further:

Counting cards in blackjack will get you kicked out of a casino.  The same principle applies here.  Using the count to your advantage in blackjack can actually give you an edge over the house, with optimal play.  That's one reason they use multiple decks in a casino.

##### Share on other sites


Let there be n red cards and n black cards.

Spoiler

What is the smallest n that permits a winning strategy?

##### Share on other sites

On 2/27/2018 at 2:46 AM, ThunderCloud said:

Therefore, at any given juncture, it is never more (nor less) advantageous to call "Stop!" than it is to allow all of the cards to be dealt out.

True so far. But it does not imply that it is not advantageous to, e.g., wait for the next 3 cards and then call "Stop!" After those 3 cards, p(next card is R) = p(last card is R) still holds, but (most probably) with a different p.

Edited by harey
##### Share on other sites

6 hours ago, harey said:

True so far. But it does not imply that it is not advantageous to, e.g., wait for the next 3 cards and then call "Stop!" After those 3 cards, p(next card is R) = p(last card is R) still holds, but (most probably) with a different p.

Spoiler

As you say, 3 cards later, p(next card is red) = p(final card is red). So there is still no advantage to calling "Stop!" versus continuing to let the cards be dealt at that point, even though your probability of winning will certainly have changed from 3 cards ago. Because the probabilities are always equal, it is statistically no different from simply betting on the final card being red: that probability changes as cards are dealt out, but you cannot influence it.

Edited by ThunderCloud
##### Share on other sites

1 hour ago, ThunderCloud said:

As you say, 3 cards later, p(next card is red) = p(final card is red). So there is still no advantage to calling "Stop!" versus continuing to let the cards be dealt at that point, even though your probability of winning will have certainly changed from 3 cards ago. It is no different, statistically, than simply betting on the final card being red — that probability changes as cards are dealt out, but you cannot influence it.

Spoiler

"So there is still no advantage": well, that is what I am asking you to prove. It does not follow from p1(R is next) = p1(last one is R) and p2(R is next) = p2(last one is R).

"It is no different, statistically": again, the same claim. I can believe you or not.

In the https://en.wikipedia.org/wiki/Secretary_problem, p(next candidate is the best) = p(last candidate is the best) also holds at all times. Still, you do not choose the first candidate.

I have about the same problem: if X has a strategy to predict R, then Y, employing the mirror strategy, can predict B. At any point, their chances sum to 1. But that alone is insufficient.

Edited by harey
##### Share on other sites

15 minutes ago, harey said:

"So there is still no advantage": well, that is what I am asking you to prove. It does not follow from p1(R is next) = p1(last one is R) and p2(R is next) = p2(last one is R).

"It is no different, statistically": again, the same claim. I can believe you or not.

In the https://en.wikipedia.org/wiki/Secretary_problem, p(next candidate is the best) = p(last candidate is the best) also holds at all times. Still, you do not choose the first candidate.

I have about the same problem: if X has a strategy to predict R, then Y, employing the mirror strategy, can predict B. At any point, their chances sum to 1. But that alone is insufficient.

Maybe this will make it clearer:

Spoiler

At any point in time, p(next card is red) = p(final card is red). So, if you stop the dealer at some point, your chances of winning by betting that the next card is red are the same as your chances if you bet that the final card is red. The optimum strategy for when to bet that the next card is red is therefore the same as the optimum strategy for when to peek at the final card and bet that it is red.  Even though the probability that the final card is red changes as cards are dealt out, choosing when to peek at it will not affect the outcome.
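bonanova's challenge to quantify the advantage can also be settled with a small dynamic program: compute the best achievable win probability over every possible stopping rule. A sketch using exact fractions (the function name is my own):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def best(r, b):
    """Best achievable P(win) over all stopping rules when r red and
    b black cards remain: either stop now and bet on the next card,
    or deal one card and continue playing optimally."""
    if r + b == 1:                     # one card left: forced bet
        return Fraction(r, 1)
    stop = Fraction(r, r + b)
    deal = Fraction(0)
    if r:
        deal += Fraction(r, r + b) * best(r - 1, b)
    if b:
        deal += Fraction(b, r + b) * best(r, b - 1)
    return max(stop, deal)
```

best(26, 26) comes out exactly Fraction(1, 2): no stopping rule gains anything, which is what the equal-probabilities argument predicts.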

##### Share on other sites


It becomes evident after translating it to the college version:

Spoiler

From a shuffled stack of 26 red and 26 black cards, R red and B black cards have been drawn. Prove that after drawing k additional cards (k = 0...51-R-B), p(the card on top of the stack is red) does not change.

Proof by induction. Let r be the number of remaining red cards and b the number of remaining black cards.

a) p(red is on top) = r/(r+b); then p(red is on top next) = (r-1)/(r+b-1)
b) p(black is on top) = b/(r+b); then p(red is on top next) = r/(r+b-1)

a) contributes: p = r/(r+b) * (r-1)/(r+b-1) = r*(r-1)/denom
b) contributes: p = b/(r+b) * r/(r+b-1) = b*r/denom

In both cases, denom = (r+b) * (r+b-1).

Summing the numerators:
r*(r-1) + b*r = r*r - r + b*r = r*(r-1+b)

Conveniently, the factor (r-1+b) in the numerator cancels against (r+b-1) in the denominator, leaving r/(r+b). QED
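The two-case recursion from the proof can also be checked numerically. A small sketch using exact fractions (the function name is mine):

```python
from fractions import Fraction

def p_top_red(r, b, k):
    """Exact P(the top card is red) after k more cards are drawn from
    a stack of r red and b black cards, using the two-case split from
    the proof: the drawn card is red with prob r/(r+b), else black."""
    if k == 0:
        return Fraction(r, r + b)
    p = Fraction(0)
    if r:
        p += Fraction(r, r + b) * p_top_red(r - 1, b, k - 1)
    if b:
        p += Fraction(b, r + b) * p_top_red(r, b - 1, k - 1)
    return p
```

p_top_red(r, b, k) equals r/(r+b) for every valid k, which is exactly the invariance the induction establishes.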

Just a little bit late...

I guess in a test, half an hour would be allocated to this question. I get depressed when I realize how much time I needed.
