BrainDen.com - Brain Teasers

gavinksong

Everything posted by gavinksong

  1. I posted this in the original thread, but I think it belongs here now. Well, for a triangle, if we add up the white space, it is just the infinite sum of a geometric series with a common ratio of 3/4 (the big empty triangle in the middle is one-fourth of the total area, the three smaller triangles together are one-fourth of the remaining area, and so forth). Strangely, it converges to the total area of the triangle. If we calculate the black space, it is the limit of (3/4)^n as n grows, which is zero. So apparently, the entire triangle is supposed to be prohibited space. I think this is where bonanova's musings become relevant: instead of looking backwards, as we did to solve the problem initially, we should look forwards from the initial point. This is all intuition, but you will notice that if the initial point is in the big empty triangle in the middle, then in the next step, it will always move to one of the three smaller white triangles. And from there, it will always move into one of its three smaller "shadows" (I think that is an appropriate term, and hopefully you understand what I mean), and so on. The important point here is that we always move into smaller and smaller "shadows". When the initial point is thrown into the triangle at random, the probability that it lies in a white triangle is 100%. However, as the simulation progresses, the points are eventually forced into smaller and smaller white triangles. In other words, the amount of "prohibited space" is not actually an absolute value, but simply increases with step number (in a sense). As in, there is no prohibited space when you throw in the first point, but the second point cannot be in the large white triangle, the third point cannot be in the large white triangle or any of its shadows, the fourth cannot be in the large white triangle or its shadows or any of its shadows' shadows, and so on. So to us, it seems as though there is some sort of absolute law because we see the larger empty white triangles, but in reality, it is only because we do not see the infinitesimally small white triangles with the dots in them. The entire process is just endless error-correction, moving toward an impossible figure, trailing behind an interesting asymptotic pattern in the process.
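     A minimal simulation sketch of the chaos game being described here, plus the area bookkeeping from the post. The vertex coordinates, starting point, and iteration count are arbitrary choices of mine, just for illustration.
     ```python
     import random

     # Chaos game: repeatedly jump halfway toward a randomly chosen vertex of the
     # triangle. The iterates trace out the Sierpinski-like pattern discussed above.
     vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
     point = (0.3, 0.2)  # arbitrary starting point
     trail = []
     for _ in range(10000):
         vx, vy = random.choice(vertices)
         point = ((point[0] + vx) / 2, (point[1] + vy) / 2)
         trail.append(point)  # collect iterates (these could be plotted)

     # Area bookkeeping from the post: the white (removed) area is a geometric
     # series with first term 1/4 and ratio 3/4, so it sums to the whole triangle,
     # while the remaining black area after k levels is (3/4)^k, which goes to 0.
     white = sum((1 / 4) * (3 / 4) ** k for k in range(200))
     print(white)           # very close to 1 (the triangle's area, normalized)
     print((3 / 4) ** 200)  # essentially 0
     ```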
  3. Wow. I made a really stupid mistake. That's embarrassing.
  4. For B: of pairs of whole numbers, what is the smallest m+n where (m,n) and (p,q) have the same LCM and GCF? Oh, I see. So DeGe pretty much had it.
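     A brute-force sketch of how I would search this question, under my own reading of it: find the smallest m+n such that some different pair (p,q) of whole numbers has the same GCF and LCM as (m,n). The search limit and the restriction to unordered pairs of distinct positive integers are assumptions on my part.
     ```python
     from math import gcd
     from itertools import combinations

     def lcm(a, b):
         return a * b // gcd(a, b)

     def smallest_matching_sum(limit=50):
         # Group unordered pairs (m, n) with m < n <= limit by their (GCF, LCM) signature.
         by_signature = {}
         for m, n in combinations(range(1, limit + 1), 2):
             by_signature.setdefault((gcd(m, n), lcm(m, n)), []).append((m, n))
         # Among signatures shared by at least two different pairs, take the smallest sum.
         sums = [min(m + n for m, n in group)
                 for group in by_signature.values() if len(group) >= 2]
         return min(sums)

     print(smallest_matching_sum())
     ```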
  5. Isn't the strategy for calculating the average always the same (add up all the values and then divide)?
  6. Yup. This is a pretty clear way of seeing how, no matter what algorithm you follow, there is a minimum number of moves, or "distance", between any of the vertices in the "worst case" set and either of the "ordered" states. Unfortunately, I could only make a complete analysis of the graph for N=4 - but I am hoping that others could expand on this.
  7. Some interesting observations from a different viewpoint... You can construct a graph where the vertices represent the different possible orderings and the edges represent moves. Two vertices are connected by an edge if you can travel between them by executing a single move. For example, the graph for N=3 would look like an octahedron. ABC would be connected by an edge to CAB, ACB, BAC, and BCA - but not CBA, since it takes two moves to go from ABC to CBA. It is easy to see here that the greatest "distance" from any vertex of the octahedron to ABC or CBA (which are always on opposite sides of the graph) is 1. We can observe a couple of things about these graphs. We know that the number of vertices is always N! and that the number of edges radiating from any vertex is always (N-1)^2. We can also guess that reverse orderings are always on "opposite" sides and that the minimum number of moves it takes to reverse an ordering is always N-1. Let's look at N=4, starting at ABCD. We know that ABCD is connected to (4-1)^2 = 9 other vertices. We also know the same is true of its reverse, and we know that there is no overlap because there should be at least 4-1 = 3 moves between reverses. This accounts for 20 out of 4! = 24 total vertices. Interestingly, this means that the remaining four vertices must constitute the "worst case" set. These are BADC, BDAC, CADB, and CDAB (which are apparently connected in the shape of a square). Now for N=5. ABCDE is connected to (5-1)^2 = 16 other vertices. If we count up these vertices and their mirrors, we find that we have accounted for 34 out of the 5! = 120 total vertices. If we assume that no arrangement in N=5 requires more than 2 moves, we must conclude that the remaining 86 vertices make up the "worst case" set, which is a surprisingly large number. An interesting note about the "worst case" set is that each member's reverse must also be a member of the set.
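     A small sketch to check these counts, under my reading of a "move" (pull one DVD out and reinsert it anywhere). It builds the single-move neighborhood of an ordering, confirms its size is (N-1)^2, and searches outward from the two ordered states to locate the worst-case set for N=4. The function names are mine.
     ```python
     from collections import deque

     def neighbors(state):
         """All distinct orderings reachable from `state` in one remove-and-reinsert move."""
         out = set()
         n = len(state)
         for i in range(n):
             rest = state[:i] + state[i + 1:]
             for j in range(n):
                 candidate = rest[:j] + (state[i],) + rest[j:]
                 if candidate != state:
                     out.add(candidate)
         return out

     def distances_from_ordered(n):
         """BFS from the sorted and reversed orderings; map each ordering to its minimum moves."""
         start = tuple("ABCDEFGHIJ"[:n])
         dist = {start: 0, start[::-1]: 0}
         queue = deque(dist)
         while queue:
             state = queue.popleft()
             for nb in neighbors(state):
                 if nb not in dist:
                     dist[nb] = dist[state] + 1
                     queue.append(nb)
         return dist

     print(len(neighbors(tuple("ABCD"))))  # (4-1)^2 = 9
     dist = distances_from_ordered(4)
     worst = max(dist.values())
     print(worst, sorted("".join(s) for s, d in dist.items() if d == worst))
     # Expected, per the reasoning above: 2 and ['BADC', 'BDAC', 'CADB', 'CDAB']
     ```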
  8. I like these self-referential number questions. They're interesting. Since I'm guessing that there are only a finite number of self-referential numbers, another good question would be "what is the highest?" In fact, an even better question would be "what are all of the self-referential numbers, and is there a pattern?" BMAD, can I look forward to a new thread being opened up for these discussions?
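     In case it helps the discussion: a tiny search sketch, assuming "self-referential number" means an autobiographical number (the first digit counts the number's 0s, the second counts its 1s, and so on). That definition is only my guess at what BMAD's puzzle intends.
     ```python
     def is_self_referential(n):
         digits = str(n)
         # Digit at position i must equal how many times the digit i appears in the number.
         return all(int(d) == digits.count(str(i)) for i, d in enumerate(digits))

     # Brute-force scan; widen the range to hunt for larger examples.
     print([n for n in range(1, 10 ** 5) if is_self_referential(n)])
     # Under this definition, the scan should find 1210, 2020, and 21200 in this range.
     ```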
  9. I realized that I made a mistake. There is one more mating that is possible since the old generation is still alive, which means I got the same result as araver even though I paired the individuals up differently. So this must be the solution?
  10. I assumed the greedy approach and, after running one simulation, got a result similar to araver's (but not quite the same). I used different matings, however. Perhaps the result is different depending on how you pair up the individuals?
  11. I should also mention that quicksort often relies on choosing a random pivot, so the number of moves it makes is in fact random (although its average runtime is fast), which is another reason why it would be difficult to count the number of steps.
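     A toy illustration of this point, with my own loose definition of a "move" (writing an element into a sublist during partitioning); it is not the OP's notion of a move, just a way to see that a random pivot makes the count vary from run to run.
     ```python
     import random

     def quicksort(items, counter):
         if len(items) <= 1:
             return list(items)
         pivot = random.choice(items)
         less = [x for x in items if x < pivot]
         equal = [x for x in items if x == pivot]
         greater = [x for x in items if x > pivot]
         counter[0] += len(items)  # count each element written into a sublist as one "move"
         return quicksort(less, counter) + equal + quicksort(greater, counter)

     data = list("BADCFKEHGJI")
     for _ in range(3):
         counter = [0]
         quicksort(data, counter)
         print(counter[0])  # typically differs between runs because the pivots differ
     ```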
  12. That would be my guess. Although for number 3, we would have to clarify (since we are not sure) that it is not the (unique) worst possible disarray but one of the worst possible disarrays (as in, the set of worst possible disarrays contains several arrangements, since one other example would simply be the reverse, and there may be more). Perhaps one requirement for WPDs is that they are symmetrical? And my guess for number 2 would be that we only round down until n=6 or n=7 (this is rather arbitrary since we don't have to round for n=6 since it is even), but we have no proof.
  13. Pickett, I apologize for the hasty wording in my previous post. Of course, a computer program takes a measurable number of "steps" - once you define the term "step." You could count at the binary level, you could count the number of times you read or write a value, you could count the number of times you iterate through a specific loop, you could count the levels of recursion achieved by a specific function, or you could count something slightly more arbitrary, such as the number of times you move an entry to another place in the list and then shift everything over. My post was in response to a post that suggested that we look at several sorting algorithms (including insertion sort, bubble sort, merge sort, shell sort, and quicksort), and there was a link to a website comparing the running times of these algorithms. I was simply saying that we would have to look at the number of "steps" made rather than at the running times, but that a "step" would be difficult to define in many of these sorting algorithms, especially since some of them work by repeating a basic action that does not fulfill the requirements for a valid "step" as described in the OP. One example would be mergesort, which divides the set into two sublists, recursively applies mergesort on them, and then merges them together by repeatedly comparing the two lowest elements (one from each sublist) and moving the lower element back onto the original list. Also, as I understand it, the OP does not specify an algorithm for sorting the DVDs. It must just be assumed that the individual somehow makes the fewest number of moves before achieving order (or if it helps to think of it this way, the best algorithm for any arrangement rather than the same algorithm for all). We can ask BMAD to clear this up, but even if it is wrong, my interpretation would also make a pretty interesting problem. If we follow your algorithm (as I understand it), then with the following arrangement of 11 DVDs:
      BADCFKEHGJI
      ABDCFKEHGJI (move A to the first position)
      ABCDFKEHGJI (move C to the third position)
      ABCDEFKHGJI (move E to the fifth position)
      ABCDEFGKHJI (move G to the seventh position)
      ABCDEFGHKJI (move H to the eighth position)
      ABCDEFGHIKJ (move I to the ninth position)
      ABCDEFGHIJK (move J to the tenth position)
      We make 7 moves. However, there is a simpler solution:
      BADCFKEHGJI
      ABDCFKEHGJI (move B to after A)
      ABCDFKEHGJI (move C to before D)
      ABCDEFKHGJI (move E to before F)
      ABCDEFKGHJI (move G to before H)
      ABCDEFKGHIJ (move I to before J)
      ABCDEFGHIJK (move K to the end)
      This only takes 6 moves, and I would say that it cannot take less than 6, regardless of what sequence of moves we make or what algorithm we follow. However, I have no proof, and I think that is the problem.
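     Here is a quick sketch of one way to check that 6 is really the minimum for this arrangement. The reasoning behind it (mine, not proved in the thread): any DVD that is never moved keeps its relative position, so the untouched DVDs must already be in increasing order, which means the number of moves is at least the total count minus the longest increasing subsequence - and that many moves also suffice. Since reverse order is apparently acceptable too, we take the better of increasing and decreasing.
     ```python
     from bisect import bisect_left

     def longest_increasing_length(seq):
         # Patience-sorting trick: tails[k] is the smallest possible tail of an
         # increasing subsequence of length k + 1.
         tails = []
         for x in seq:
             i = bisect_left(tails, x)
             if i == len(tails):
                 tails.append(x)
             else:
                 tails[i] = x
         return len(tails)

     def min_moves(arrangement):
         seq = list(arrangement)
         keepable = max(longest_increasing_length(seq),
                        longest_increasing_length(seq[::-1]))  # allow reverse order as a goal
         return len(seq) - keepable

     print(min_moves("BADCFKEHGJI"))  # prints 6, matching the 6-move solution above
     ```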
  14. Unfortunately, no. Keep in mind that these are computer algorithms. First of all, you can't really get a "number of moves" from a computer algorithm. Also, not all of these algorithms follow the conditions described in the OP. For example, quicksort and mergesort recursively divide the set into subsets and then recombine them. Heapsort and shellsort repeatedly swap entries until the set is ordered. Only insertion sort and bubble sort (and perhaps selection sort if it is implemented unusually) make valid "moves" as they are defined in the OP - that is, an item is repeatedly taken and moved to a new place (in between two other items) - and none of these algorithms are efficient in most cases. That being said, none of these algorithms are guaranteed to make the most efficient moves. These computer algorithms are optimal for sorting abstract data structures in the shortest amount of time, not in the smallest number of "moves." Making a "move" (i.e., swapping or moving entries) is inexpensive for a computer and will be done often in order to simplify the problem, even if it is not part of the most efficient solution (by which I mean the solution with the fewest moves). I guess a better way to think of it would be if a computer had to calculate how to optimally sort an arrangement of physical items where making a "move" was extremely expensive - for example, if we were sorting heavy storage containers.
  15. "Best moves" and "worst disarray" seem interrelated. The worst disarray for one algorithm might be duck soup for another. (Duck soup means easy - an Americanism.) Given that the individual always rearranges the DVDs in the most efficient way possible, what is the highest number of moves it takes for any initial arrangement? (The individual may not use the same algorithm on every arrangement. In fact, one should not assume that the individual follows any one algorithm at all). Edit: From a previous post:
  16. You are absolutely right. In fact, I do remember trying to arrange 5 in a way that would require 3 moves when we were trying to reach the n-2 upper bound. I suppose it changes starting at 6, but I am hard pressed to find an acceptable explanation. We may have to resort to a programming solution as you said. One idea would be to create an array of length n! which is to contain the fastest possible solutions (# of steps) corresponding to each lexicographic permutation of 123...n, and a function which hashes permutations to the correct indices of the array. Then, iterate through the permutations, conducting a breadth-first search on each unsolved item which terminates when 123...n or its reverse is found, and then filling in the solutions array for the current permutation as well as for all permutations encountered in the intermediate steps. Finally, take the max value of the solutions array. We can do this! It would be nice if this problem got the other members' attention. It is a rather challenging problem.
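     A sketch of this brute-force plan, with one simplification: rather than a separate breadth-first search from every permutation, a single BFS outward from the two "ordered" states fills in the minimum number of moves for every permutation at once, and a plain dictionary stands in for the length-n! array plus hash function. The move definition (pull one item out and reinsert it anywhere) is my reading of the OP.
     ```python
     from collections import deque
     from math import factorial

     def single_move_neighbors(state):
         """Permutations reachable by removing one element and reinserting it elsewhere."""
         out = set()
         n = len(state)
         for i in range(n):
             rest = state[:i] + state[i + 1:]
             for j in range(n):
                 candidate = rest[:j] + (state[i],) + rest[j:]
                 if candidate != state:
                     out.add(candidate)
         return out

     def worst_case_moves(n):
         start = tuple(range(1, n + 1))     # 123...n
         dist = {start: 0, start[::-1]: 0}  # its reverse counts as ordered too
         queue = deque(dist)
         while queue:
             state = queue.popleft()
             for nb in single_move_neighbors(state):
                 if nb not in dist:
                     dist[nb] = dist[state] + 1
                     queue.append(nb)
         assert len(dist) == factorial(n)   # every permutation was reached
         return max(dist.values())          # moves needed for the worst possible disarray

     for n in range(3, 8):
         print(n, worst_case_moves(n))
     ```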
  17. Are you sure we are rounding up? And of course we are ignoring the trivial cases of 1 DVD and 2 DVDs. Yeah, it doesn't match the pattern for 1 to 3 DVDs. But here, try to reorder this in less than six moves: BADCFKEHGJI. If you can, then maybe we are supposed to round down. The problem is that I don't know whether it is a lack of skill on my part that I can't unscramble it in less than ceil(N/2) moves, or if that is the true answer. I am curious as to how this turns out. Hopefully, some of the great minds of BrainDen will put their heads together and come up with a solution.