
Posts posted by ThunderCloud


On 5/3/2018 at 9:22 PM, plasmid said:
After rereading, I probably misunderstood what plainglazed was saying. I interpreted it as being equivalent to setting things up so the buddy would list all the coins that are heads-up, write out their positions in binary, and then take the XOR of all of those binary numbers to see which coin to take. If that were the case, then handling the scenario H H H T H, you would see heads up on coins
1: 001
2: 010
3: 011
5: 101
So the XOR of those would be 101, or 5. You want the XOR to be 1, or 001. So flip coin 4. You would have all of the coins heads-up, the XOR of 001 through 101 would be 001, and your buddy would know to take the first coin. But now I think plainglazed wasn't quite saying that.
Wow… I love it! This is one of the best puzzles I've ever seen, thank you for sharing it, and for clarifying the solution. ^.^
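For the curious, the XOR scheme described above can be sketched in a few lines of Python (a rough illustration, not from the original posts; coin positions are 1-indexed, and it assumes the needed flip is nonzero):

```python
from functools import reduce

def coin_to_flip(coins, money_pos):
    """coins: sequence of 'H'/'T', with 1-indexed positions.
    Returns the position to flip so that the XOR of all heads-up
    positions equals money_pos."""
    heads_xor = reduce(lambda a, b: a ^ b,
                       (i for i, c in enumerate(coins, 1) if c == 'H'), 0)
    # flipping coin k toggles k in the XOR, so choose k = heads_xor ^ money_pos
    return heads_xor ^ money_pos

coin_to_flip("HHHTH", 1)  # -> 4, matching the H H H T H example above
```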

On 4/19/2018 at 8:36 AM, plainglazed said:
an attempt at an explanation by example
Let's say there are twenty-four coins. Label 0 for tails and 1 for heads. Thus a string of twenty-four ones and zeros describes
the coins in play. Let's also say the tenth coin is the money coin. So something like this:
H H T H T T T H T T H H H H T H T H T T H T H T
1 1 0 1 0 0 0 1 0 0 1 1 1 1 0 1 0 1 0 0 1 0 1 0
Listing each coin position in binary and corresponding coin value:
1  1(heads)
10  1
11  0
100  1
101  0
110  0
111  0
1000  1
1001  0
1010  0
1011  1
1100  1
1101  1
1110  1
1111  0
10000  1
10001  0
10010  1
10011  0
10100  0
10101  1
10110  0
10111  1
11000  0
We use the bit positions that are powers of two as parity checks on our string of coins.
Define the parity at bit position 1 as the sum mod 2 of all bits that are in positions where the least significant digit of that
position in binary is a one (the ones column if you will). Thus the sum mod 2 of the bits in positions 1,3,5,7,9,etc. In our
case that is (1+0+0+0+0+1+1+0+0+0+1+1)mod2=1
Define the parity at bit position 2 as the sum mod 2 of all bits that are in positions where the second least significant digit
of that position in binary is a one (the twos column). Thus the sum mod 2 of the bits in positions 2,3,6,7,10,11,etc. In our
case that is (1+0+0+0+0+1+1+0+0+1+1+0)mod2=1
Define the parity at bit position 4 as the sum mod 2 of all bits that are in positions where the third least significant digit of
that position in binary is a one (the fours column). Thus the sum mod 2 of the bits in positions 4–7, 12–15, 20–23, etc. In our
case that is (1+0+0+0+1+1+1+0+0+1+0+1)mod2=0
The parity at position 8 is the sum mod 2 of the bits in positions 8–15,24–31,40–47,etc. In our case that is
(1+0+0+1+1+1+1+0+0)mod2=1
The parity at position 16 is the sum mod 2 of the bits in positions 16–31,48–63,80–95,etc. In our case that is
(1+0+1+0+0+1+0+1+0)mod2=0
etc.
The values (heads or tails) at bit positions that are powers of two (positions 1,2,4,8,16) are 1,1,1,1,1.
And we've calculated the parity at bit positions that are powers of two to be 1,1,0,1,0.
The bits at positions 4 and 16 of the calculated parity do not match the actual bit value of our coins in those positions. But
if we flip the coin at position 10100 and recalculate the parity, it matches the value of our coins.
Now if we flip the money coin, our partner can calculate the parity at positions 1,2,4,8,16 and see that bit values do not match
at positions 2 and 8 and know the money coin is at position ten.
If the total number of coins is a power of two, the steps of making the calculated parity match the actual bit values and then
flipping the money coin can be combined into one flip.
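The parity computation walked through above can be sketched in Python (a rough illustration of the worked example, not code from the original post; positions are 1-indexed):

```python
def parity_checks(bits):
    """bits[i-1] is the coin at position i (1 = heads, 0 = tails).
    For each power-of-two position p, return the sum mod 2 of all bits
    whose position has the p bit set in binary."""
    n = len(bits)
    checks = {}
    p = 1
    while p <= n:
        checks[p] = sum(bits[i - 1] for i in range(1, n + 1) if i & p) % 2
        p <<= 1
    return checks

# the 24-coin example above: H H T H T T T H T T H H H H T H T H T T H T H T
bits = [1,1,0,1,0,0,0,1,0,0,1,1,1,1,0,1,0,1,0,0,1,0,1,0]
parity_checks(bits)  # -> {1: 1, 2: 1, 4: 0, 8: 1, 16: 0}
```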
For a better explanation, wiki Hamming Code.
I am not sure if this works for every case, or if I might be misunderstanding the scheme. For a simple counterexample…
Spoiler
Consider a case with 5 coins. Suppose their initial orientation is H H H T H, and that the first coin is the special one. Let H = 1 and T = 0, and number the coins from 1 to 5.
Then we should have three "parity" coins: coin 1 is the sum (mod 2) of coins 1, 3, and 5; coin 2 is the sum of coins 2 and 3; coin 4 is the sum of coins 4 and 5. In this example, both coins 2 and 4 have "wrong" parity, and because each sums parity over a completely disjoint set of coins, it will take 2 coin flips to correct the parity: namely, coins 3 and 5. But then, if coins 3 and 5 are Tails, all three of the parity coins (1, 2, 4) will be correct regardless of which way they face. How to specify that coin #1 is worth gajillions?

Spoiler
Let's label the sets #0 through #9. Place 0 coins from set #0, 1 coin from set #1, 2 coins from set #2… etc… through 9 coins from set #9 on the scale, all at once. The hundredths digit on the scale display tells you which of the ten sets is counterfeit.
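A quick sketch of the weighing arithmetic (the original post doesn't state the coin weights, so the genuine weight of 1.00 and counterfeit excess of 0.01 per coin here are assumptions for illustration):

```python
def counterfeit_set(scale_reading, genuine=1.00, offset=0.01):
    """Place k coins from set #k (k = 0..9) on the scale at once: 45 coins.
    If each counterfeit weighs `offset` more than a genuine coin, the
    excess over the expected weight reveals which set is counterfeit."""
    expected = genuine * sum(range(10))  # 45 genuine coins would weigh this
    return round((scale_reading - expected) / offset)

counterfeit_set(45.0 + 7 * 0.01)  # -> 7: set #7 is counterfeit
```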

3 hours ago, BMAD said:
Alas, English.... But I am confused. From your interviewing you know for a fact that this family has A boy. So regardless of their birth order all we don't know is the probability that the other child is a boy. And the other child could be a boy or girl, so I still believe it is 1/2.
The subtlety is…
Spoiler
…that you can only say that "the other child" has a ½ chance of being a boy when you know that one child specifically is a boy. For example, if I tell you that I have two children and my eldest child is a boy, then you can say that the probability that both of my children are boys is ½. However, if I merely tell you that I have two children and at least one is a boy, but do not in any way distinguish which child is definitely a boy, then all I have told you is that my children are not both girls. Three equally likely possibilities remain: the eldest might be the only boy, the youngest might be the only boy, or both might be boys.
In @bonanova's puzzle, the children are distinguished from one another only if just one of them (the boy) was born on a Tuesday (I do not know whether that child is the eldest, but I can speak of "the child born on Tuesday" separately from "the other child"). If both children were born on Tuesday, then I cannot say "this one is a boy and the other one could be a boy or girl," because I have no means of distinguishing one child from the other.
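The two conditioning cases can be checked by brute-force enumeration (a small sketch I'm adding for illustration):

```python
from itertools import product
from fractions import Fraction

families = list(product("BG", repeat=2))  # (eldest, youngest), 4 equally likely

# "at least one is a boy" only rules out GG, so three cases remain
at_least_one = [f for f in families if "B" in f]
p_both_given_any = Fraction(at_least_one.count(("B", "B")), len(at_least_one))

# "the eldest is a boy" rules out two of the four cases
eldest_boy = [f for f in families if f[0] == "B"]
p_both_given_eldest = Fraction(eldest_boy.count(("B", "B")), len(eldest_boy))

print(p_both_given_any, p_both_given_eldest)  # 1/3 1/2
```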

5 hours ago, bonanova said:
@ThunderCloud That's close, but a bit low.
Hmmm… aha!
Spoiler
Let's temporarily label the children firstborn and secondborn. For any parent with two children, there are 49 equally likely combinations of weekdays on which the firstborn and secondborn were born. In 13 of them, at least one child is born on Tuesday; both children are born on Tuesday in only 1 of those 13 cases.
So! Given that at least one child was born on Tuesday, the probability that both are born on Tuesday is then 1/13. If both are born on Tuesday, then we know they aren't both girls, so the probability that both are boys is ⅓. If one of them wasn't born on Tuesday, then that child has a ½ probability of being male.
Overall, then, the probability is (1/13)*(1/3) + (12/13)*(1/2) = 19/39 that both children are boys.
That should do it. ^^
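The arithmetic in that last step checks out with exact fractions (a quick verification sketch, not from the original post):

```python
from fractions import Fraction

p_both_tuesday = Fraction(1, 13)  # given that at least one child was born on Tuesday
p = p_both_tuesday * Fraction(1, 3) + (1 - p_both_tuesday) * Fraction(1, 2)
print(p)  # 19/39
```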

Spoiler
Having two children, there are four equally likely combinations of sexes.
If both children were born on Tuesday, then all we know for certain is that they are not both girls, which eliminates just one of the four cases. So then the probability that both are boys is ⅓.
If not, then we know the child who was born on Tuesday is a boy, which eliminates two of the four cases, and so the probability that the other child is also a boy is ½.
Since the probability of both children being born on Tuesday (given that at least one of them is) is 1/7, the overall probability that both children are boys is
(1/7)*(1/3) + (6/7)*(1/2) = 10/21.

2 hours ago, flamebirde said:
The trick answer's 9 months. Because if 1 dollar is 12 months, and 2 dollars is six months, then $1.50 is in the middle of those two, so surely the time also has to be in the middle of the time span. The number in the middle of 12 and 6 is 9, so 9 months.
However, what's important to note is that this problem isn't linear but exponential. That is, the change in time spent saving is much greater when we go from $1 to $2 than it is when we go from $6 to $7. So originally from $1 to $2, we double our initial savings amount, so we halve the time it takes. From $1 to $1.50, though, we increase our savings amount by 50%, so the total time it'll take is 12/(1.5)=8 months.
Of course, the simple way to do it is just to divide 12 dollars by 1.5 dollars a month, but the above answer gives a little more reasoning to it.
Clever. ^^; Nice explanation!

Hmmm, I'll go for the seemingly obvious…
Spoiler
8?

14 hours ago, BMAD said:
n is not known
Well, in that case…
Spoiler
Bob can determine a_{0} by asking for P(0). After that, Bob can check for additional terms by guessing another positive integer, say, 1; if P(1) is different from P(0), Bob solves for a_{1} assuming it is the only remaining term. Bob then tests a new value, maybe P(2)… if it is different from what would be predicted by the polynomial so far, Bob re-solves for a_{1} and a_{2} assuming there are no further remaining terms. Then he tests another value… etc. Bob stops when testing confirms his assumption that there are no further terms; because each of the nonzero coefficients is positive, a single unique positive test value can determine whether any additional nonzero terms exist.
So, all in all, I think that Bob would need n + 2 test values to determine with certainty the coefficients of Alice's degree-n polynomial.

A more detailed answer…
Spoiler
If the question is to be interpreted to suggest that one of (1), (2), (3), or (4) is the correct answer, and that one of those four answers is chosen randomly such that each has a 25% probability of selection, then it can be shown that the question has no answer:
Among the answer choices given, there are three distinct probabilities: 50%, 25%, and 0%. If the correct answer is 50%, then since this is only one out of four answer choices, the probability must in fact be 25%. That is, by assuming the correct answer is 50%, we obtain that it is not 50%. Similarly, if we assume the correct answer is 25%, then since two out of four answer choices are correct, the real answer must be 50%… i.e., not 25%, a contradiction once again. If we assume that the answer is 0%, then answer (3) is correct and the probability of choosing it is 25%, not 0%, once again a contradiction. Finally, if we assume that the probability is anything else not listed as an answer choice, then the probability of choosing it is 0% (and therefore, 25%, etc…), meaning it is, in fact, listed among the answer choices. Once again, a contradiction. For any 0% ≤ p ≤ 100%, if we assume the correct answer is p, then we must conclude that it is not p. There cannot be a consistent answer.
On the other hand, if the question is interpreted to suggest that it is answered as an entirely random probability (any real number between 0% and 100%, without regard to the listed answer choices), then the probability of choosing the precisely correct answer is 0%, as nonzero probabilities can only occur over an interval (e.g., probability of choosing the correct answer ± 1%).

42 minutes ago, ThunderCloud said:
Then again...
Spoiler
It also seems contradictory to claim that the probability is 0% and yet answer choice #3 is not correct. So, perhaps the only consistent assessment is that the probability of guessing a correct answer is undefined. The question is, after all, a paradox.

This is a fun one...
Spoiler
I'd say the probability of randomly choosing the correct answer is 0%. If any of answers (1, 2, 3, and/or 4) is considered correct, it results in a contradiction. So, none of the answers may be considered correct (including option 3).

I'm coming up with…
Spoiler
35 locks, with each general receiving a set of 20 keys.
It must be that any group of 3 generals is short by (at least) one type of key, and furthermore that all four of the generals not in this trio must possess a copy of that missing key. There are 35 distinct ways to choose a trio from among seven generals, and for each of those combinations, four copies of a unique key must exist to be held by each of the remaining four generals. That means there must be 140 keys altogether, four each of 35 distinct patterns, to be distributed amongst the seven generals. Each general receives one unique key for every unique grouping of 3 out of the other 6 generals.
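The counting argument above reduces to a couple of binomial coefficients (a small check I'm adding for illustration):

```python
from math import comb

locks = comb(7, 3)             # 35: one lock per trio of generals to be blocked
keys_per_general = comb(6, 3)  # 20: one key per trio formed from the other six
total_keys = 4 * locks         # 140: four copies of each of the 35 key patterns

print(locks, keys_per_general, total_keys // 7)  # 35 20 20
```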

Is it…
Spoiler
A calendar?

I think…
Spoiler
No. 2008 is an even number (it includes a factor of 2), while each of (10^{n} − 1) / 9 is odd by construction (it does not include a factor of 2).

Hmmm… assuming n is known to Bob, my first thoughts…
Spoiler
Naively, as there are n+1 unknown terms to be found, there would not need to be more than n+1 distinct choices of k to yield a solvable set of linear equations. In particular, P(0) = a_0, P(1) is the sum of the coefficients, P(−1) gives the sum of the even-indexed coefficients minus the sum of the odd-indexed coefficients, etc.
I haven't thought of a way to leverage the fact that each coefficient is an integer ≥ 0 to further reduce the number of P(k)'s Bob would need to fewer than n+1, but there could be one…
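One concrete way to realize the "n+1 evaluations suffice" claim is Newton's forward differences on P(0), P(1), …, P(n) (a sketch I'm adding for illustration; it isn't the method named in the thread, just one way to solve that linear system exactly):

```python
from fractions import Fraction
from math import factorial

def recover_coeffs(values):
    """values = [P(0), P(1), ..., P(n)] for a degree-n polynomial.
    Returns [a_0, a_1, ..., a_n] via Newton's forward-difference formula
    P(x) = sum_k (Δ^k P(0)) * C(x, k)."""
    n = len(values) - 1
    diffs, row = [], [Fraction(v) for v in values]
    while row:
        diffs.append(row[0])                      # Δ^k P(0)
        row = [b - a for a, b in zip(row, row[1:])]
    coeffs = [Fraction(0)] * (n + 1)
    for k, d in enumerate(diffs):
        poly = [Fraction(1)]                      # falling factorial x(x-1)...(x-k+1)
        for j in range(k):
            new = [Fraction(0)] + poly            # multiply by x
            for i in range(len(poly)):
                new[i] -= j * poly[i]             # then subtract j * poly
            poly = new
        scale = d / factorial(k)                  # d * C(x, k)
        for i, c in enumerate(poly):
            coeffs[i] += scale * c
    return coeffs

recover_coeffs([1, 6, 15])  # -> [1, 3, 2], i.e. P(x) = 1 + 3x + 2x^2
```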

I think…
Spoiler
Each wine tester could take a small sip from a different half of the thousand bottles, in such a way that the poisoned bottle is uniquely specified by which of the ten testers are found dead the following morning.
For example, consider the bottles to be indexed from 0 to 999. This range would require ten binary digits (bits) to express. You could have Tester #1 sip from every bottle whose lowest bit is a "1", Tester #2 sip from every bottle whose second-lowest bit is a "1", and so on.

8 hours ago, Donald Cartmill said:
I'd play Tic-Tac-Toe on a magic square:
2 7 6
9 5 1
4 3 8
If I get to go first, I'll start with the "middle square" and choose 5. Next I'll choose a corner (2, 4, 6, or 8) depending on what you pick. If you chose one of the edges, I choose a corner adjacent to it; if you chose a corner, I choose a neighboring corner. Presumably, your next choice will be forced so as to prevent me from scoring 15 on a diagonal, after which I can guarantee a win for myself by choosing another corner number, thus leaving me with two ways to win on my next turn.
Your solution is invalid. Your opponent chooses 4; you choose 2 or 8; your opponent chooses the other of 2 or 8. You are now forced: you cannot take the remaining corner, or your opponent scores 15 across the bottom (if your second pick was 2). Therefore you must pick 3 to block; he then must pick 7 to block… no winner.
Indeed, if both players play the ideal strategy for Tic-Tac-Toe, the game always results in a draw. It is nonetheless the ideal strategy.

In essence...
Spoiler
I'd play Tic-Tac-Toe on a magic square:
2 7 6
9 5 1
4 3 8
If I get to go first, I'll start with the "middle square" and choose 5. Next I'll choose a corner (2, 4, 6, or 8) depending on what you pick. If you chose one of the edges, I choose a corner adjacent to it; if you chose a corner, I choose a neighboring corner. Presumably, your next choice will be forced so as to prevent me from scoring 15 on a diagonal, after which I can guarantee a win for myself by choosing another corner number, thus leaving me with two ways to win on my next turn.
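The reason this works is that every row, column, and diagonal of that square sums to 15, so picking numbers that sum to 15 is exactly Tic-Tac-Toe. A quick check (a sketch I'm adding for illustration):

```python
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]
lines = [row[:] for row in square]              # the three rows
lines += [list(col) for col in zip(*square)]    # the three columns
lines.append([square[i][i] for i in range(3)])      # main diagonal
lines.append([square[i][2 - i] for i in range(3)])  # anti-diagonal
print(all(sum(line) == 15 for line in lines))  # True
```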

15 minutes ago, harey said:
So there is still no advantage: well, that's what I ask you to prove. It does not follow from p1(R is next) = p1(last one is R) and p2(R is next) = p2(last one is R).
It is no different, statistically; it is the same. I can believe you or not.
In the https://en.wikipedia.org/wiki/Secretary_problem, p(next candidate is the best) = p(last candidate is the best) also holds all the time. Still, you do not choose the first candidate.
I have about the same problem. If X has a strategy to predict R, then Y, employing the mirror strategy, can predict B. At any point, their chances sum to 1. But that alone is insufficient.
Maybe this will make it clearer:
Spoiler
At any point in time, p(next card is red) = p(final card is red). So, if you stop the dealer at some point, your chances of winning by betting that the next card is red are the same as your chances if you bet that the final card is red. The optimum strategy for when to bet that the next card is red is therefore the same as the optimum strategy for when to peek at the final card and bet that it is red. Even though the probability that the final card is red changes as cards are dealt out, choosing when to peek at it will not affect the outcome.
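The key identity, p(next card is red) = p(final card is red) after any dealt prefix, can be verified exhaustively on a tiny deck (a sketch I'm adding for illustration):

```python
from itertools import permutations
from fractions import Fraction

deck = "RRBB"  # a tiny deck: 2 red, 2 black
shuffles = sorted(set(permutations(deck)))  # 6 equally likely orders

def p_red_at(prefix, position):
    """P(card at `position` is red | the dealt cards match `prefix`)."""
    matching = [s for s in shuffles if s[:len(prefix)] == tuple(prefix)]
    return Fraction(sum(s[position] == "R" for s in matching), len(matching))

# after any dealt prefix, P(next card red) equals P(last card red)
ok = all(p_red_at(s[:k], k) == p_red_at(s[:k], len(deck) - 1)
         for s in shuffles for k in range(len(deck) - 1))
print(ok)  # True
```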

6 hours ago, harey said:
True so far. But it does not imply that it is not advantageous to, e.g., wait for the next 3 cards and call "Stop!" After the next 3 cards, p(next card is R) = p(last card is R) still holds, but with (most probably) a different p.
Spoiler
As you say, 3 cards later, p(next card is red) = p(final card is red). So there is still no advantage to calling "Stop!" versus continuing to let the cards be dealt at that point, even though your probability of winning will certainly have changed from 3 cards ago. Because the probabilities are always equal, it is not statistically different from simply betting on the final card being red: that probability changes as cards are dealt out, but you cannot influence it.

2 hours ago, Molly Mae said:
I agree with this statement.
I imagine it is implying that the last card is always at probability 50%, and that I'll disagree with. Since we can evaluate the probability after a new card is revealed, we can update the probability with that new information. If, for example, after 26 cards, we've only drawn black, we can say with certainty that the next card is red. And we can say the same about the last card. The next card and the last card will always have the same probability, but it won't always be 50%.
Spoiler
Precisely. There is no intended implication that the probability is 50% at all times, only that the probabilities are equal at all times. The probability is 50% on average, of course.

4 hours ago, bonanova said:
Pencils down. Discussion time over.
For the believers in the utility of clever play, choose a convenient number of cards (e.g. pick a small number and enumerate the cases) and quantify the advantage that can be gained.
For the naysayers, give a convincing argument that all the factors already discussed (permission granted to peek at spoilers) exactly balance each other out.
I remain convinced by my initial argument, but to expound a little:
Spoiler
At any point in time, the probability that the next card drawn will be red is equal to the probability that the final card will be red. This is because both cards are drawn from the same well-shuffled pile. Therefore, at any given juncture, it is never more (nor less) advantageous to call "Stop!" than it is to allow all of the cards to be dealt out.

Spoiler
At any point in time, regardless of how many red cards and black cards remain undealt, the probability that the next card is red will always equal the probability that the last card is red. I don't think it matters when (or whether) "Stop!" is called.
Fair deal?
Posted in New Logic/Math Puzzles
Assuming everyone ate the same number of rolls…
Each person ate 2⅔ rolls (8/3 rolls) presumably, so Alpha is fairly owed payment for the 7/3 rolls he contributed beyond his own share, while Bravo is owed for just ⅓ roll. Since Charlie valued his 8/3 rolls at 8 coins (1 coin per ⅓ roll), 7 of them should go to Alpha and 1 to Bravo.
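The split can be checked with exact fractions (a small verification sketch I'm adding for illustration):

```python
from fractions import Fraction

share = Fraction(8, 3)                # each of the three people ate 8/3 rolls
alpha_gave = 5 - share                # 7/3 rolls Alpha gave up to Charlie
bravo_gave = 3 - share                # 1/3 roll Bravo gave up
price_per_roll = Fraction(8) / share  # 8 coins for 8/3 rolls = 3 coins per roll

print(alpha_gave * price_per_roll, bravo_gave * price_per_roll)  # 7 1
```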