BrainDen.com - Brain Teasers

Everything posted by Quantum.Mechanic

  1. Possibly not. One problem I've seen is that allowing intermediate numbers larger than the upper limit (e.g., 256) generates a lot of intermediate results of little use, which could be trimmed. And this is Perl, so the overhead for arrays and hashes is larger than you might expect. From http://perlmaven.com/how-much-memory-do-perl-variables-use, in bytes:

                 size  total_size
         SCALAR    24          24
         ARRAY     64          64
         HASH     120         120

     Those are empty, unassigned variables. When the structure for each start number is filled out, it could be trimmed down to the useful bits and saved long term.
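     For anyone who wants to check those numbers on their own build (they vary with Perl version and compile options), Devel::Size reports them directly. A minimal sketch:

         #!/usr/bin/perl
         use strict;
         use warnings;
         use Devel::Size qw(size total_size);   # same CPAN module the perlmaven article uses

         my ($scalar, @array, %hash);           # empty, unassigned variables

         printf "SCALAR %4d %4d\n", size(\$scalar), total_size(\$scalar);
         printf "ARRAY  %4d %4d\n", size(\@array),  total_size(\@array);
         printf "HASH   %4d %4d\n", size(\%hash),   total_size(\%hash);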
  2. Doh! Yes, of course. With path merging, there will be longer and shorter ways to get the same result. So I need to either run them all and find the shortest, or generate them in order of increasing path length. The latter should take less space and less run time.
  3. OK, now I've added path merging, and fixed a bug where some paths didn't progress. (Thanks plasmid for checking my work.) Here's a run with 256, also printing out details for a worst start/goal pair, (195,170), and some other stats that might be interesting. Note that order is not preserved in the path listing; preserving it can be done, but it takes more work, and is usually thrown away.

     Here's the code, much of it just for recording info for stats. Note that it has to dump the full list of paths after each start value; otherwise, the memory blows up. I've also limited the upper end to twice the $last value. There may be a few rare results that find a shorter path via a large power, but I can't prove or disprove this. Anyone have a counterexample?
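     A minimal sketch of the path-merging idea, for anyone following along: a breadth-first search that expands each value at most once, so the first path recorded for a value is a shortest one. The move set below (+1, -1, double, square) and the limits are placeholders, not the puzzle's actual operations.

         #!/usr/bin/perl
         use strict;
         use warnings;

         my ($start, $goal, $last) = (195, 170, 256);
         my $limit = 2 * $last;                      # cap on intermediate values

         my @ops = (                                 # placeholder moves
             sub { $_[0] + 1 },
             sub { $_[0] - 1 },
             sub { $_[0] * 2 },
             sub { $_[0] ** 2 },
         );

         my %seen  = ($start => [$start]);           # value => first (shortest) path found
         my @queue = ($start);
         while (@queue) {
             my $v = shift @queue;
             last if $v == $goal;
             for my $op (@ops) {
                 my $w = $op->($v);
                 next if $w < 1 || $w > $limit || $seen{$w};
                 $seen{$w} = [ @{ $seen{$v} }, $w ]; # merge: later, longer paths are ignored
                 push @queue, $w;
             }
         }
         print join(' -> ', @{ $seen{$goal} }), "\n" if $seen{$goal};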
  4. I see now that my algorithm doesn't merge paths. I'll go off and chew on that a bit, and see what I can do.
  5. I wrote a Perl program. These results took about 5 minutes: The Perl program is below. To avoid blowing up memory, it only keeps the paths long enough to compute the worst one for each start. However, it is easily tweaked to keep the longest path for each start value. There is also a speedup from limiting intermediate values to less than the square of the highest allowed starting value. This makes a few paths a bit longer, but doesn't seem to affect the overall results much.
  6. Just some random analysis... Any choice of start and goal can be a winner. If we have the matrix of shortest path lengths from each start to each goal, shouldn't we use some game theory and randomly choose our start and goal values based on this matrix? For instance, if choosing a starting value of 23 has a 1% chance of winning, and 24 has a 0.1% chance of winning, we should choose 23 ten times more often than 24. But we should still choose 24 sometimes, to keep the opponent honest. While it might be interesting to note what the initial matrix looks like (which starting numbers and goals are good), it seems the best approach is to iteratively generate start and goal vectors.
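     A sketch of the weighted pick itself; the win rates here are invented for illustration, since the real ones would come out of the path-length matrix.

         #!/usr/bin/perl
         use strict;
         use warnings;

         # Hypothetical win rates: 23 should be chosen ten times as often as 24.
         my %win_rate = (23 => 0.010, 24 => 0.001, 25 => 0.004);

         sub weighted_pick {
             my %weight = @_;
             my $total  = 0;
             $total += $_ for values %weight;
             my $r = rand($total);
             for my $key (keys %weight) {
                 return $key if ($r -= $weight{$key}) < 0;
             }
             return (keys %weight)[0];   # guard against floating-point leftovers
         }

         print weighted_pick(%win_rate), "\n" for 1 .. 10;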
  7. I appreciate the numerical theory in the answer, but I don't buy into that. A Monte Carlo run shows that 90 is the most likely value of N, though the probability falls off only slowly above that. Picking 5 random values from a list whose highest value is N, keeping the trials where 90 is the highest of the 5 observed values, and running 1 million trials, I get a table like this (cumulative; I've not listed all values up to 190, the last N that had any hits):

          N   cumulative %
         90        4.63%
         91        9.01%
         92       13.13%
         93       17.02%
         94       20.74%
         95       24.21%
         96       27.53%
         97       30.72%
         98       33.71%
         99       36.58%
        100       39.30%

     From this run, the chances are even that N is less than or more than 105. How do we reconcile this result with the numerical analysis? Also, if the numerical analysis has 2 valid answers, then there must be a range of valid answers in that region. It seems to me that this problem doesn't have an answer, but a handwaving "something about here" result.
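     For reference, a sketch of that kind of Monte Carlo run. The prior range for N (90..300) and drawing 5 distinct values from 1..N are my assumptions; the original run may have been set up differently.

         #!/usr/bin/perl
         use strict;
         use warnings;

         my ($picks, $observed_max, $trials) = (5, 90, 1_000_000);
         my ($lo, $hi) = (90, 300);            # assumed prior range for N

         my (%hits, $total);
         for (1 .. $trials) {
             my $n = $lo + int rand($hi - $lo + 1);
             my %chosen;
             $chosen{ 1 + int rand($n) } = 1 while keys %chosen < $picks;
             my ($max) = sort { $b <=> $a } keys %chosen;
             next unless $max == $observed_max;
             $hits{$n}++;
             $total++;
         }
         die "no hits\n" unless $total;

         my $cum = 0;
         for my $n (sort { $a <=> $b } keys %hits) {
             $cum += $hits{$n};
             printf "%4d  %6.2f%%\n", $n, 100 * $cum / $total;
         }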
  8. > Actually it's O(1). Your comment got me thinking: I couldn't say that it's in NP because there is no n in the problem. We have 130 as a starting point and we want to see if any of the binomial(129,8) dice combinations have 120 different sums... Unless you are referring to the problem with "N-sided Dice", then yeah, that's crazy exponential...

     If you assume a specific problem, with specific inputs, then yes, people get lazy and say O(1), because the problem itself is a constant. That's cheating. NP-complete requires a polynomial method to prove that a given solution is valid for a problem in NP. And these are generalized problems, so we never use O() notation for the specific ones (what would be the point?). To be NP-complete, verifying a solution to the "smallest max face value for distinct sums of (s sides, d dice)" problem would have to scale as some power of s and/or d. There seems to be no way to do this short of duplicating the search, and the search is exponential (perhaps (s**d)**k?).
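     To make the verification cost concrete, here is what checking a candidate looks like under one reading of the problem: d dice, each die a list of s faces, and every one of the s**d possible rolls must give a different sum. The dice below are a toy base-3 construction, not a solution to the actual puzzle; the point is that even the check enumerates all s**d rolls.

         #!/usr/bin/perl
         use strict;
         use warnings;

         my @dice = ( [1, 2, 3], [1, 4, 7], [1, 10, 19] );   # 3 dice, 3 faces each: 27 rolls

         my %seen;
         my $distinct = 1;

         sub roll {
             my ($die, $sum) = @_;
             if ($die == @dice) {                 # every die has a face: record the sum
                 $distinct = 0 if $seen{$sum}++;
                 return;
             }
             roll($die + 1, $sum + $_) for @{ $dice[$die] };
         }

         roll(0, 0);
         print $distinct ? "all sums distinct\n" : "duplicate sums found\n";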
  9. Still, lots of brute forcing, not much elegance.
  10. It seems that this puzzle is NP-hard and not NP-complete (if I understand this correctly), as there is not an efficient way to prove that the solution is minimal, without an exhaustive search for a better one. (Though a solution can be proved to be valid easily enough, just not minimal.)
  11. Brute force searches will take a long time. Is there any literature on the structure of solutions? I've looked at sum-free sets, triple-free sets, and prime number of measurement, to name a few, but none seem quite useful in this problem.
  12. Maximum sum, or highest numbered side?
  13. Please define "ideal". To me, "ideal" means the ball would bounce back to height h (assuming various things, like the floor being a non-rotating frame of reference, a perfectly elastic collision, etc.). Since you state the ball bounces back to h/2, something else must be going on. For instance, the ball at h could be going faster than terminal velocity in air. Or the floor is not a perfectly elastic surface.
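     If we pin the entire loss on a coefficient of restitution e (my assumption, just to put a number on the h/2 figure), the drop and rebound heights relate as

         v_{\text{impact}} = \sqrt{2gh}, \qquad
         v_{\text{rebound}} = e\,v_{\text{impact}}, \qquad
         h_{\text{rebound}} = \frac{v_{\text{rebound}}^2}{2g} = e^2 h,

     so a rebound to h/2 corresponds to e = \sqrt{1/2} \approx 0.71.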
  14. Minimum, maximum, or expected number of riffle shuffles? I assume the minimum and expected number are of the greatest interest. As it is possible to preserve the starting order through a riffle shuffle as described (the trivial shuffle), there is no defined maximum. The minimum is also not so useful, as each shuffle can add anywhere from zero entropy up to some maximum per shuffle. As an estimate, it would be interesting to run a simulation to see how many shuffles, on average, are required to move the top card to the bottom of the deck, or the bottom card to the top (a quick sketch is below). This is not directly related to the question, but it might be one of the harder cases, and might therefore estimate the order of magnitude needed for the OP. An analytic treatment is beyond me at the moment, even if I had the time.
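     A quick Monte Carlo of the "top card to bottom" question, assuming a GSR-style riffle (cut at a binomial point, then drop cards with probability proportional to packet size); the shuffle model is my own assumption.

         #!/usr/bin/perl
         use strict;
         use warnings;

         # One GSR-style riffle of the deck passed in.
         sub riffle {
             my @deck = @_;
             my $cut = 0;
             $cut += int rand 2 for 1 .. @deck;       # binomial(52, 1/2) cut point
             my @left  = @deck[ 0 .. $cut - 1 ];
             my @right = @deck[ $cut .. $#deck ];
             my @out;
             while (@left || @right) {
                 if (rand(@left + @right) < @left) { push @out, shift @left }
                 else                              { push @out, shift @right }
             }
             return @out;
         }

         my $trials = 2_000;
         my $total  = 0;
         for (1 .. $trials) {
             my @deck = (0 .. 51);          # card 0 starts on top
             my $shuffles = 0;
             until ($deck[-1] == 0) {       # stop when card 0 reaches the bottom
                 @deck = riffle(@deck);
                 $shuffles++;
             }
             $total += $shuffles;
         }
         printf "average riffles for the top card to reach the bottom: %.1f\n",
             $total / $trials;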
  15. Anza, can you just convert your procedure to base 23? Anything ending in 0 (base 23) is divisible by 23. Since the expansion by 0's is mechanical, this should be easier than actually doing the math, and more accurate than the statistical argument. I'm also wondering if there's some number-theoretic result that would help with determining this in the general case, for any base and any argument. -QM
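     A sketch of the conversion itself (base-23 "digits" run from 0 to 22, so they're printed dot-separated here; the trailing digit is just $n % 23).

         #!/usr/bin/perl
         use strict;
         use warnings;

         sub to_base23 {
             my ($n) = @_;
             my @digits;
             do {
                 unshift @digits, $n % 23;
                 $n = int($n / 23);
             } while ($n > 0);
             return join '.', @digits;
         }

         for my $n (506, 529, 1000) {
             printf "%6d = %-10s (base 23)%s\n", $n, to_base23($n),
                 $n % 23 == 0 ? "  <- ends in 0, divisible by 23" : "";
         }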
  16. Nice one. Simple, mysterious, and obvious in hindsight.
  17. Given that not everyone on this board has the same high command of English... this makes more sense if the intent was "expected value" (a.k.a. average).
  18. Questions using "between", giving an interval, are generally interpreted as using the closed interval, so 0-100 means [0,100], including the endpoints. Sure, the language is imprecise, and much confusion has occurred in the past when the open interval is intended. This is a bigger problem for real (non-integer) ranges, at least in many people's minds -- it seems natural to include the endpoints when stating an integer range, less so when giving a non-integer range. It also reminds me of the language issue with things like "under-10" kids' leagues, where the endpoint is not included (and it is a non-integer range).