Everything posted by Prime
-
The merchant's broken-weight problem (breaking a weight into 4 pieces) by Bachet de Meziriac, published in 1624, shows how you can weigh any whole number of pounds between 1 and 40 with just ... So it looks like a high bid, but since no one offered better -- $60, buying 4 whole-numbered weights. (I didn't quite get what arbitrary-numbered weights are. Fractional?)
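A quick sketch of the check, assuming the classic answer of 1, 3, 9, and 27 pound pieces (which the elided part of the post presumably names): on a two-pan balance each piece can sit with the goods, sit opposite them, or stay off the scale, so every signed sum of the four weights is reachable.

```python
from itertools import product

# Bachet's four pieces: powers of 3 (assumed here; the post elides them).
# On a two-pan balance each piece can go on the goods pan (-1),
# stay off the scale (0), or go on the weight pan (+1).
weights = [1, 3, 9, 27]

reachable = set()
for signs in product((-1, 0, 1), repeat=len(weights)):
    total = sum(s * w for s, w in zip(signs, weights))
    if total > 0:
        reachable.add(total)

print(sorted(reachable) == list(range(1, 41)))  # True: every weight 1..40
```

This is just balanced ternary in disguise, which is why powers of 3 are the unique whole-numbered answer.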
-
I thought we laid quite a bit of groundwork in previous topics like this one.
-
And I just noted a mental hiccup I had. I said "as long as z < c (the convergence point)." But that is always the case.
-
I couldn't quite follow the logic of the derivative at the point of intersection of y = z^x with y = x. Why is the derivative a deciding factor for the convergence of those oscillating values? Anyway, here is how I envision the behavior of z^(z^(z^(... for 0 < z < 1. The spiraling lines plot the value of the exponent at each next step. Notice the first turn of the spiral, where z^z - z > z^z - z^(z^z), or equivalently z^(z^z) > z. That should always be the case as long as z < c, where c is the convergence point, or the point of intersection of y = x with y = z^x (z^c = c). If the spiral did not get narrower with each turn, y = z^x would have a local minimum somewhere in that interval, and we know that's not the case. Why would such a spiral never reach the convergence point c? With every turn it appears to encounter a steeper rise of y = z^x and should converge faster...
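The spiral described above is easy to watch numerically. A minimal sketch (the base z = 0.3 is my arbitrary choice): the odd-indexed terms climb toward the fixed point while the even-indexed terms descend toward it, and the two agree with z^c = c at the limit.

```python
# Iterating the tower a(1) = z, a(k+1) = z**a(k) for a base 0 < z < 1.
# Odd- and even-indexed terms approach the fixed point c (z**c == c)
# from opposite sides -- the narrowing spiral described in the post.
z = 0.3  # arbitrary example base
a = z
terms = []
for _ in range(60):
    terms.append(a)
    a = z ** a

c = terms[-1]  # essentially the fixed point by now
print(round(c, 6), round(z ** c, 6))  # the two values agree: z**c == c
```

Printing the first few `terms` shows the bracketing directly: terms 1, 3, 5, ... increase, terms 2, 4, 6, ... decrease.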
-
OK, since it is simple, I'll go through the motions. For the second point the illustration is as follows: each vertical line is the difference between y = x and y = r^x. The corresponding horizontal line plots the next x-value for the consecutive step (raising to the resulting power). The lines form isosceles triangles. Thus the x-step converges to zero at the same rate as the y-step, and both coordinates will converge at the same point -- the point of equilibrium. So the problem p(x) = c is solved and proven here for 1 <= c <= e. For any given x = a, where 1 <= a <= e^(1/e), we don't have a neat finite formula to find c such that a = c^(1/c). But we can find an approximation using a spreadsheet.
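The spreadsheet approximation mentioned at the end is just the repeated exponentiation itself: iterate c -> a^c until it settles. A minimal sketch, assuming 1 <= a <= e^(1/e) so the iteration converges, using a = 2^(1/2) where the answer should be c = 2:

```python
# Given a with 1 <= a <= e**(1/e), approximate c such that a == c**(1/c)
# (equivalently a**c == c) by the same column of repeated exponentiation
# a spreadsheet would compute: c(next) = a**c.
a = 2 ** 0.5   # example: a = sqrt(2), expected c = 2
c = 1.0
for _ in range(1000):
    c = a ** c
print(round(c, 6))  # prints 2.0
```

Each step shrinks the remaining error by roughly the factor c*ln(a) < 1, which is the isosceles-triangle picture in numbers.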
-
Fascinating research! I feel a graphical illustration for 1 <= x <= e^(1/e) is in order here. So for the function r^x, where r^k < k, raising r to the resulting power will lower the result repeatedly. (With each step we'd be raising r to a smaller power.) Whereas for r^k = k, repeated operations will keep the result the same (k). Such a power is found at the equilibrium point k where r = k^(1/k). Therefore, the substitution method of solution, where we substitute p(x) with x^p(x), does not solve the equation p(x) = c; it solves x^c = c instead. However, the solution for the latter happens to coincide with the solution for the former where c happens to be in the convergence range for p(x). To bring it to science for the interval (1, e^(1/e)): 1. Do we have a proof that x^(1/x) has its maximum at x = e? 2. What about a proof that there may not be any other convergence points within the convergence interval? I mean other than the points of equilibrium (intersection of y = x with y = r^x). It is possible that repeated steps of r^k would lower the result by a smaller interval every time, never reaching some point C_k above the lower equilibrium point. (Even though inspection does not support that.)
-
My complaint is that the method of solving p(x) = n by substituting p(x) with x^p(x) yields incorrect results where n is outside the function's convergence (existence) range. I'd be more content if the method yielded some unreal solutions or something of the sort, like in the case of x^2 = -1, for example. Here is what I think the p(x) graph looks like. My other wild guess is that the portion of the graph for x in (0, 1) is a 90-degree-rotated, proportionally shrunk reflection of the graph for x in (1, e^(1/e)).
-
If there was just one initial spin followed by infinite coin tosses, then all numbers would not be equally likely, and "6" could occur more or less often than 1 in every 6 numbers, depending on where we started. Also note that the 41/6 average number of steps was calculated for the scenario where upon reaching "6" we stop. In that sense the experiment was geared toward "6", and all numbers were not equal. The average number of steps to reach "6" in general, not just for the first time, would differ from 41/6.
-
The same method solves p(x) = 100, yielding x = 100^(1/100). Yet it is quite clear that p(100^(1/100)) does not converge to 100, but to some other, much lower number. If p(2^(1/2)) indeed converges to 2, then there must be some special significance/quality of the square root with respect to p(x), which we are yet to discover.
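That "much lower number" can be found by just running the tower. A sketch: for x = 100^(1/100) the iteration settles at the lower root of x^y = y, which sits a little above 1, nowhere near 100.

```python
# The tower for x = 100**(1/100) does converge, but not to 100:
# it settles at the lower solution of x**y == y, a little above 1.
x = 100 ** 0.01
y = 1.0
for _ in range(1000):
    y = x ** y
print(round(y, 4))  # a little above 1, far below 100
```

So the substitution method picked out a root of x^y = y that the tower itself never reaches.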
-
You'd have to define what constitutes a trial. If it is one spin + 1 toss, then "6" comes up on average after 6 trials. If a trial is a toss of a coin, then "6" will not be produced 1 time out of 6, but will depend on where you started.
-
All I can see so far is that we found that the expression x^(x^(x^..., which we designate as p(x), converges for x = n^(1/n). However, we don't know what it converges to. Solving p(x) = n by substituting p(x) with x^p(x) must be an invalid method. The fact that 2^(1/2)^(2^(1/2)^(2^(1/2)^( ... tends to 2 is a coincidence. It may have a limit somewhere lower than 2. The function x^(1/x) was proposed as a domain of x values for which p(x) converges. The only point on x^(1/x) which is significant for our problem is its maximum, which we suspect is at x = e, where the function value is e^(1/e). Assuming that is the maximum, if we found that there is x > e^(1/e) for which p(x) converges, then the function x^(1/x) loses its significance.
-
That is not how I split it. I split the infinite x^(x^(x... into two infinite strings: x^(x^(x^... (x^(x^(x... Anyhow, we can see that the following solution is wrong: p(x) = n; x^p(x) = n; x^n = n; x = n^(1/n). We can tell that just by observation. The solution suggests that for p(x) = 100, x = 100^(1/100), whereas we solved that for p(x) = 2, x = 2^(1/2). On the other hand, 2^(1/2) > 100^(1/100). Thus this solution suggests that if we start the infinite exponent string with a smaller number, with correspondingly smaller incremental steps, we are going to arrive at a larger result than if we started from a larger number with larger incremental steps. That makes no sense. Never mind that for 2^(1/2) the p(x) seems to converge somewhere close to 2. That is coincidental. My conclusion: the solution by splitting the infinite string p(x) whichever way -- whether into several infinite strings, or just one term and the infinite string -- does not work!
-
That result looks suspect to me, since 3.5 is exactly the average score between 1 and 6, whereas the OP asks to find the average number of steps to produce "6". I ran a simulation in a spreadsheet -- several 1000-experiment runs. They all came up within +-0.05 of the 6.83 found theoretically by EventHorizon. The average over 10 separate trials came within 0.01 of the theoretical 6.83333. For the random coin toss simulation I used a random long binary number, parsing it from right to left and regarding 0's as heads and 1's as tails.
-
I have already posted an inductive proof that for x inside that window (x = n^(1/n)), p(x) converges. But the other side has not been proven here yet -- that a number outside that window would not converge. It would be very interesting to come up with a proof, or at least some demonstration of that point. Here is my understanding of how the logarithmic solution for p(x) = n yielding x = n^(1/n) is wrong. 1. Given p(x) = n. 2. As p(x) = x^(x^(x^(x^ ..., then p(x) = x^p(x) = x^(x^p(x)), and so on. Furthermore, we can break an infinite string into two infinite strings, and thus p(x) = p(x)^p(x). 3. log_x p(x) = log_x n. Substituting p(x) from (2): 4. log_x x^p(x) = log_x n --> p(x)*log_x x = log_x n --> p(x) = log_x n; and since p(x) = n, this gives x^n = n --> x = n^(1/n). However, we could have substituted p(x) with p(x)^p(x), and then the equation would become: p(x)*log_x p(x) = log_x n --> n*log_x p(x) = log_x n, which would yield a different solution for x. (As we have seen with the example of p(x) = 4.) Therefore, I think the logarithmic solution is illegal. Yoruichi-san already mentioned that we don't know the range of p(x) -- what kind of solutions it yields, complex numbers, or what. Here is an interesting possibility: what if p(x) excludes rational values from its range? In other words, what if the true (minimal) upper limit for converging p(x) is always a non-rational number?
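The p(x) = 4 example mentioned above is worth spelling out numerically: the substitution method returns x = n^(1/n), but n = 2 and n = 4 give the very same base, so the method "solves" p(x) = 2 and p(x) = 4 with one x. The tower itself has only one limit for that base.

```python
# The substitution method gives x = n**(1/n). But n = 2 and n = 4
# produce the very same base:
x2 = 2 ** (1 / 2)
x4 = 4 ** (1 / 4)
print(abs(x2 - x4) < 1e-12)  # True: sqrt(2) == fourth root of 4

# Iterating the tower for that base shows which value it actually picks:
y = 1.0
for _ in range(1000):
    y = x2 ** y
print(round(y, 6))  # 2.0 -- the tower converges to 2, never to 4
```

So at best the substitution finds a fixed point of x^y = y; it cannot distinguish which fixed point (if any) the tower converges to.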
-
I don't want to go about it ad infinitum. Let me just observe that the statement of the problem (OP) basically requested to prove mathematically that the average number of trials to produce an event of probability p is 1/p. Yoruichi-san has done exactly that. I also offered a "simpler" questionable proof later on. I feel that Bonanova's "simpler proof" is a convincing demonstration, but comes short of the formal proof standard for the reasons I have explained in the previous posts. (I pointed out specifically where the simpler proof uses the presumed notion of average to derive the formula thereof.) Nothing that has been posted here since has moved me to reconsider my position.
-
Actually, on that one point I agree with Bonanova. Your problem here is more interesting. It is a variation on a random walk problem. I agree with EventHorizon's solution (above), which is economical and precise.
-
I think it is quite evident at this point that the solution x = n^(1/n) for p(x) = n is wrong. Why -- is a different question. Your graph of the function x^(1/x) shows a maximum at x = e, after which the function diminishes and tends to 1. So the maximum for p(x) is at x = e^(1/e), where p(x) <= e, after which the values of p(x) must diminish. Therefore, p(x) = 5 should not have any solution. Really, if we start with a smaller value when constructing the infinite string p(x), we should not expect p(x) to be larger. At this point I am compelled to remind everyone that x^(1/x) as a domain of the values for which p(x) converges was a wild guess. We need to prove the top limit where p(x) still has a finite result (converges). Bonanova has asked for that already. We know that p(x) converges for some x. We proved that for x = n^(1/n), p(x) < n (which is an off-the-mark upper limit), and we know that starting at some point p(x) runs off to infinity. For example, it is quite evidently the case for x = 2. Now we suspect that the maximum for p(x) is at x = e^(1/e), with p(x) <= e at that point. Also note that for x < 1 the behavior of p(x) is interesting too. Basically, p(x) tends to 1 there, but it may have a local minimum somewhere between x = 0 and 1.
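The suspected edge at x = e^(1/e) is easy to probe numerically, short of a proof. A sketch: just below the edge the tower settles a bit below e; just above it, the iteration creeps for a while and then runs off to infinity.

```python
import math

def tower(x, steps=2000, cap=1e6):
    """Iterate y -> x**y; return the last value, or cap if it blows up."""
    y = 1.0
    for _ in range(steps):
        y = x ** y
        if y > cap:
            return cap
    return y

r = math.e ** (1 / math.e)         # about 1.4447, the suspected edge
print(round(tower(r - 0.001), 3))  # converges, a bit below e
print(tower(r + 0.001) >= 1e6)     # True: runs off to infinity
```

This only demonstrates the point at two sample bases, of course; it is the numerical experiment standing in for the proof Bonanova asked for.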
-
If I understood the conditions right. Each turn consists of a random spin + one coin toss. Then ...
-
I'd venture to say that for any x = n^(1/n), where n is a positive integer, the function x^(x^(x^... converges. The function x^(1/x) has its maximum somewhere between x = 2 and x = 3. Perhaps it is the ubiquitous e. The logarithmic solution/proof shown here before must be wrong, since it seems to give incorrect results for numbers other than 2. The proof that I offered in one of the previous posts just shows that for x = 2^(1/2) the expression is less than 2. But that is not necessarily the lowest such limit. Just a wild guess: the function x^(x^(x^... is convergent for numbers up to the maximum of the function y = x^(1/x).
-
So the proof is based on a specific number of rolls -- 6 -- using the expected value of the number of times each individual number appears.

1. Expected value is an average (mean) by its definition. It is calculated by adding the probability of each individual occurrence times the weight of that occurrence. So if the probability of rolling "6" exactly 1 time out of 6 rolls is p1, 2 times out of 6 -- p2, and so on, then the expected value of the number of times "6" comes up is 1*p1 + 2*p2 + 3*p3 + 4*p4 + 5*p5 + 6*p6, which should add up to 1 if we plugged in the actual values for the respective probabilities.
2. Instead of calculating the expected value from its definition, the proof asserts that there are a total of 6 equal expected values and that their sum should be equal to the number of rolls in the experiment -- 6.
3. Then, relying on the equality of each expected value, the proof finds an individual one by dividing the total number of rolls by the number of participants: 6/6 = 1.
4. Having thus found that the average number of times "6" comes up over 6 rolls is one, the proof concludes that the average number of rolls to produce an individual number is 6.

What I see here is that this proof takes the notion of the average defined elsewhere, makes manipulations with it over a specific number of experiments, and arrives at a value of that average (expected value) for the chosen number of experiments. I have the most doubts with respect to step 3 above as a proof. Consider an example where the probabilities of separate events are not equal. Say there are a total of 3 events with probabilities 1/2, 1/3, and 1/6. Then, following the same line of reasoning, we'd have to assert that event one (with 1/2 probability) occurred 3 times out of 6 because that's the proportion defined by its probability. But isn't that practically the same thing for which a proof was requested, and not an assertion? We can split hairs indefinitely.
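Step 1's definition can be computed directly, without step 2's shortcut. A sketch using binomial probabilities for "exactly k sixes in 6 rolls": the sum 1*p1 + 2*p2 + ... + 6*p6 does indeed come out to 1.

```python
from math import comb

# Expected number of sixes in 6 rolls, straight from the definition in
# step 1: sum over k of k * P(exactly k sixes), binomial probabilities.
p, n = 1 / 6, 6
expected = sum(k * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))
print(round(expected, 10))  # 1.0 -- one six expected per six rolls
```

This confirms the value the disputed step 3 asserts, though of course a numerical check is a demonstration, not the requested proof.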
Still, I feel that the proof Y-s gave (same as I had in mind) does not leave much room for argument or doubt. Your proof, based on a fair division of space between individuals over a given interval, still seems to me more a common-sense assertion of what average is than a formal proof.
-
I never heard of such a definition. If I understood you right, when we roll a die 6000 times and "6" does not come up exactly 1000 times, then we have proven that the probability of rolling "6" is not exactly 1/6.
-
For the case where you rolled p*N sixes out of N rolls -- yes that's the average. On the other hand, why do you discard all those cases when you rolled N sixes out of N rolls, or N-1, N-2, ... sixes. Should we not account for those when establishing the formula for the average number of rolls? Is it because their probabilities are a bit smaller? Or is it because we have decided up front what the average is and thus need not consider other "non-average" cases?
-
2^(2^(2^(2^2))) = 2^(2^(2^4)) = 2^(2^16) = 2^65536. A huge number -- it would take an entire notebook just to write it down. Raising the exponent to some power is not the same as raising the result of the exponentiation to some power.
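Python's big integers can evaluate the tower right to left and show just how large it is: 2^65536 has 19729 decimal digits.

```python
# Evaluate the power tower right to left, as exponentiation requires:
# 2^(2^(2^(2^2))) = 2^(2^(2^4)) = 2^(2^16) = 2^65536.
t = 2 ** (2 ** (2 ** (2 ** 2)))
print(t == 2 ** 65536)  # True
print(len(str(t)))      # 19729 decimal digits
```

Compare with left-to-right grouping: ((2^2)^2)^2 = 2^8 = 256, which is exactly the distinction the post draws.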
-
But if the die is unfair, the average number of rolls to roll a number is still 1/p. For example, if the probability of "6" in a loaded die were 1/2 and of each other number 1/10, then it would take on average 2 rolls to roll "6" and 10 rolls to roll any other number. Maybe that example makes it clearer that your reasoning for proving the average is 1/p presumes beforehand that it is 1/p.
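The loaded-die example is easy to check by simulation (a demonstration of the 1/p claim, not a proof of it):

```python
import random

# Loaded die: "6" with probability 1/2, each of "1".."5" with 1/10.
# Average number of rolls until a given face appears should track 1/p:
# about 2 rolls for "6", about 10 rolls for any other face.
random.seed(1)
faces = [1, 2, 3, 4, 5, 6]
weights = [0.1] * 5 + [0.5]

def rolls_until(target, trials=100_000):
    total = 0
    for _ in range(trials):
        n = 0
        while True:
            n += 1
            if random.choices(faces, weights)[0] == target:
                break
        total += n
    return total / trials

print(round(rolls_until(6), 2))  # close to 2
print(round(rolls_until(3), 1))  # close to 10
```

With 100,000 trials per face the sample averages land within a few hundredths of the 1/p values.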