Everything posted by mmiguel
-
Y = any true/false statement you want, e.g. "the sky is blue", "I am taller than four inches", "the next lotto numbers are X", etc.

Way 1: Ask, "Does Da mean yes, if and only if you are the True god, if and only if the statement Y is true?" Since the three gods know what iff means, they will have no trouble understanding what you are asking.

For those less comfortable with iff, we can restate it as follows.

Way 2: Let C = A iff B. A and B are statements (meaning either of them can be true or false), and C is also a statement (it can be true or false). C states that the truth of A is equivalent to the truth of B, i.e. that either:
1. A and B are both true, or
2. A and B are both false.
If A and B are both true, or both false, then C is true. If one of them is true and the other false, then C is false.

Now to understand the original expression, D iff ¬L iff Y, use the associative property of iff: ((D iff ¬L) iff Y), i.e. let Q = D iff ¬L, evaluate Q first, then evaluate Q iff Y. Since iff is associative, we could instead have combined (¬L iff Y) first and then evaluated D iff that result, and we would get the same answer.

Using the truth-equivalence reading, the question in English is: Is the truth of the statement ( the truth of the statement (Da means yes) is equivalent to the truth of the statement (you are the True god) ) equivalent to the truth of the statement Y?

Suppose Y is true. If the god were truthful and Da meant yes, the god would reply Da. If the god is not truthful and Da means yes, the god still answers Da, thanks to the (you are the True god) part of the question: the god attempts to lie (i.e. flips from Da to Ja), and the (you are the True god) part flips his answer around once more (from Ja back to Da). If Da actually means no, the god still answers Da, thanks to the (Da means yes) part of the question: the god would attempt to answer Ja, but the (Da means yes) part flips it back to Da. If Da means no and the god is a liar, it will still answer Da, by the same reasoning. Thus, regardless of what Da/Ja mean, and regardless of whether the god tells the truth or lies, the question always ties a true Y to the answer Da and a false Y to the answer Ja.
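If it helps to see it exhaustively, here is a small Python sketch (mine, not part of the original puzzle) that brute-forces all eight combinations of whether Da means yes, whether the god you asked is truthful, and whether Y is true. How the god turns his intended yes/no into Da/Ja is my assumption about the puzzle's usual rules.

```python
from itertools import product

def gods_reply(da_means_yes: bool, truthful: bool, y: bool) -> str:
    """Reply to: 'Does Da mean yes, iff you are the True god, iff Y?'"""
    # For booleans, == is exactly iff; this is the real truth value of the question.
    proposition = (da_means_yes == truthful) == y
    # The truthful god conveys the proposition; the liar conveys its negation.
    intends_yes = proposition if truthful else not proposition
    # Translate the intended yes/no into the god's own word.
    return "Da" if intends_yes == da_means_yes else "Ja"

for da_means_yes, truthful, y in product([True, False], repeat=3):
    print(da_means_yes, truthful, y, "->", gods_reply(da_means_yes, truthful, y))
# In all eight cases the reply is Da exactly when Y is true, and Ja when Y is false.
```

Running it prints Da for every row where Y is true and Ja for every row where Y is false, which is the whole point of the nested iff.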
-
I wish to ace my midterm.
-
I wish to always obtain or experience whatever I want, and to never obtain or experience whatever I don't want, where my wants are evaluated at the time the thing is to be obtained or experienced, and as such actions are applicable per the genie code.
-
I wish to be able to travel, accurately, to any point in spacetime that I select, whenever I want.
-
Your wish is granted. Congratulations. I wish to know an incorruptible wish.
-
Just had a duh! moment. Theory is only the second definition; theorem is the first. This is why I need sleep. Commencing hibernation in t minus .....
-
The word theory refers to different things in different contexts. There are mathematical theories, which are deduced logically from axioms, and there are observational theories, which are inferences from observations and in which we are more confident than in a mere hypothesis. I basically provided two definitions just now, which could be parsed out as sets of constraints on essential characteristics. Our definition of definition (meta-definition, if you will) is that a definition is a list of constraints on characteristics. I stated it slightly differently before, but if you formulate a constraint as a true/false statement, you can treat the truth of that statement as a characteristic itself, and use the true or false value of each such constraint within an equivalence relation, as I stated before. Our (meta-)definitions are consistent, and the concept of theory fits into both of our frameworks.

"Rather than dropping 200 balls and gaining useless information, if I drop 2 and come up with the theory of gravity I would say that is overall more information."

You misunderstand me; I am not arguing what you imply I am arguing. You are saying that a general result that can be applied to many things in the world contains more information than a specific result that only really applies to one thing. What I am saying has nothing to do with comparing many observations to few. Let's say he only drops 2. I'm saying there are many characteristics he *could* choose to remember about both of these observations, and tons of information in just those 2. He could choose to observe, analyze, and remember minute variations in the trajectories (assume he has perfect vision and memory, which obviously isn't realistic). He could choose to remember a lot, but he focuses on and remembers only the properties of the observations associated with the constant acceleration of the objects. He filters out all the other garbage, which is valid information; it's just not important information. Having a wider sphere of applicability does not require more information. If it did, every law of physics, which are believed to have practically the widest spheres of applicability compared to anything else, would require volumes of words to represent (more information means more intricacies, i.e. more bits are needed to represent more aspects). The fact that most of them can be stated simply and elegantly hints that they contain less information, but offer more usefulness per bit, than other things.

2) Ok, sorry, I wasn't trying to put your definition down; I was just trying to show that fuzziness can be simulated with what I was saying.

3) "Hence by your definition of definition, information is lost, since information is always lost in averaging."

It's more like filtering vs. averaging. Averaging doesn't make sense in many cases for aggregating properties. Filtering is essentially removing any specification from those properties.

"The information is not lost, rather, it is left open-ended."

As far as the information content required to represent a concept goes, leaving something open-ended is exactly the same as not allocating any bits to represent a specification. This is what I have been saying all along: don't even mention color when talking about horses, and don't store a specification for color along with your concept of horse. This makes your concept of horse contain less information, and rightly so, since color typically has nothing to do with using a horse for transportation or companionship or whatever else people find them useful for. The concept contains less information because we filtered out things like color. Adding specifications on characteristics requires storing additional information as part of the concept.

"I.e. if we pretend 'horse' is described by the variables W,X,Y,Z, where W and X are the necessary and sufficient biological classification, and Y and Z are the things that can differ and still allow the object to be classified as a horse, and let's pretend the mean of Y is 5 and Z is 6, then you would say people mentally use horse = (W,X,5,6), and I would say people mentally use horse = (W,X,y,z), where they recognize that y and z are variable and differ from each horse."

Actually, I would say that people mentally use (W,X). They remove all information associated with Y and Z from the generic definition of horse, unless it becomes pertinent at some point to consider it for a specific horse. This may vary from person to person. Say, for example, Y = color. As part of some people's definition, they might include a specification on the range of color (e.g. it should be a "natural"-looking color). If they saw a really strange, bright green horse, they might think it is something other than a horse. Anyway, they would have (W is true for this object, X is true for this object, and Y is in the range of natural colors [defined based on what I have previously observed]). For practical purposes, it would make no difference whether they add the Y constraint or not, although I suspect most people would not... they would rather continue to classify it as a horse, but note that maybe it was spray-painted or something (poor horse).

Even with just the (W,X) definition above, this filtering is a necessity for making sense of the world. You chose 4 possible attributes, but is there really a limit to the number of attributes you can apply to an object? We would run out of letters long before we ran out of possible attributes if we tried to attach as many as possible to some observation we might have right now. One of my points is that this filtering prevents us from having that problem. This filtering is the same as abstraction, the same as generalization, and is the result of taking massive amounts of information and removing pieces of it, leaving only what is important to us as humans. It is a simple form of lossy information compression. Not all of this is done within our brains, but a neat thing is that we do have influence and power over the final result. We can choose to define a new concept by selecting new attributes, and as we see and observe new things, we build a library of these concepts (an ontology, if you will). There is a rough sketch of this filtering idea after this post.

4) "I still do not see how information is necessarily lost, especially using my above definition of definition. I don't think something can be lost that would never have been given in the first place. If I see a horse and it is brown and I tell my friend and do not tell him it is brown, is that loss of information? I still know the horse is brown, and the horse does not cease being brown, the only thing is that the information was not propagated."

We are splitting hairs now. An analogy to your case above might be this (your eyes in this analogy = you from above, and your brain in this analogy = your friend from above): does suddenly going blind, or having a bright light shined in your eyes, correspond to a loss of information about what you might be trying to see at the time? This is a semantics argument, and one could argue either way. In both my analogy and your example, we are talking about data flow. If the information content of the data decreases as it flows from one place to another, do we consider that a loss of information in the flow? I would say so. I think you think I am trying to say something other than what I am trying to say, and that for the most part, if we each understand what the other actually means, we probably mostly agree.

5) I was not being clear. I was essentially referring to the same concept of data flow that I mentioned above. The part where I mentioned subjectivity refers to the fact that the way we choose attributes in our ontologies is subjective: there is no objective rationale for defining a horse the way we do. We define horse the way we do because we have subjectively determined that it is a useful definition to have. If we try to make ourselves more objective, we suddenly have less and less rationale to define anything the way we currently do, since the concept of "importance" vanishes as you become more and more objective. For example, say in my mind I define favorite food = pizza. As I attempt to make myself less subjective, all the reasons for which I currently like pizza become less and less important -- I need this more objective concept to account for people who hate pizza. I would probably end up with the definition of favorite food being identical to the definition of food in general. As you become less subjective, the boundary between useful and useless information disappears, and it all just becomes information. Now, from this "objective" perspective where all information is equally useful (or useless, depending on how you look at it), say we observe a data flow such as your example above, or my analogy. If we, as our objective observer selves, were to judge whether or not information is being lost in such a data flow, we would conclude that the information is not entirely flowing through. It is getting filtered. That's all I was saying there. I wasn't saying that the actual, objective properties of the horse vanish in real life or anything. I think what I am saying is not as radical as you think it is.
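To make the filtering idea concrete, here is a rough Python sketch (mine, not from the discussion itself, with made-up attribute names standing in for W, X, Y, Z): the stored concept keeps constraints on only the defining characteristics and simply never mentions the rest.

```python
# Hypothetical defining constraints for "horse": stand-ins for W and X above.
horse_concept = {
    "taxonomic_family": lambda v: v == "Equidae",  # W: biological classification
    "toes_per_foot": lambda v: v % 2 == 1,         # X: odd-toed
}

def matches(concept, observation):
    """True if the observation satisfies every constraint the concept retains.
    Characteristics the concept never mentions (color, height, position, ...)
    are ignored entirely -- they cost the concept no bits at all."""
    return all(test(observation[attr]) for attr, test in concept.items())

# A full observation carries far more characteristics than the concept stores.
observation = {
    "taxonomic_family": "Equidae",
    "toes_per_foot": 1,
    "color": "bright green",        # irrelevant to the concept: filtered out
    "height_m": 1.6,
    "position": (37.77, -122.42),
}

print(matches(horse_concept, observation))  # True: still classified as a horse
```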
-
Ok, no more beating then. No evidence for determinism, because errors can always be explained by randomness - agreed, but on the flip side, all errors can also be explained by complexity (and/or by trying to evaluate something that is not well-defined). I wasn't saying it sucks that I couldn't prove I was right --- I was saying it sucks that we can't come up with a better answer than "we can pick whichever one we like better".
-
Yeah that makes sense... nice observation!
-
Not really, it's more that I feel there is insufficient evidence to make a confident conclusion one way or the other. I know people say things like, "well, it was published in a respected magazine, and all the leading scientists believe it", but I prefer to evaluate how logical something is for myself before blindly believing what someone else says. In most cases, I find articles published in respected magazines very logical, and find that they make sense. If something doesn't make sense to me, I don't necessarily assume it's because it's wrong, and I usually try to dig a little deeper, or simply reserve judgment. I have not found anything to convince me that randomness exists, mostly just restatements that the popular position is X, without much reasoning. Either that, or insufficient reasoning that is cleverly worded to make it seem like it accounts for every possible case, when it is in fact limited to certain cases. Given that there is insufficient evidence, I can really just choose whichever one seems more in line with my other observations about the world. If I turn out to be wrong, then oh well. Everything else I've observed so far is readily explained by the philosophy I've chosen, as far as I can judge. And every time someone says, "hey! this thing isn't explained", I feel that I can come up with a way of showing that what they are pointing out is in fact consistent with my belief. By believing the opposite side, you are doing the same thing; you just have the benefit of holding the more popular belief. I think we've beaten this topic to death.
-
"I would say a definition is a specification of a set of characteristics that are necessary and/or sufficient to categorize something as that word/phrase." That is the same thing I had in mind, except I wrote it more carelessly, what do you disagree about? "I do not think removal of information is "lying to ourselves", I suppose I'm trying to make it seem like there is a contradiction, although I don't really think there is one. "Specifying every piece of information and making conclusions that are only true for those exact specifications is not very useful. " I think I said that somewhere. Well I said it's not possible, either way I'm obviously starting from an idealistic case that doesn't make too much sense for practical purposes (ideal case is every characteristic matters), and departing from that back into the everyday world (practical case is only characteristics important to us matter). I'm describing the spectrum of specificity, not saying we should be more specific in everything we do, and less abstract. "It's not ignorance, but rather generality which makes the observations (and hence learning and gaining experience in life) actually useful." My point was the realization that the act of generalization itself is equivalent to the act of removing information. Ignorance is lack of information. I thought it was ironic that something good (generalization) arises out of something usually considered bad (removing information). "And on "sameness", I don't think anyone actually uses sameness in the way you seem to be defining it. The majority of time people don't think or talk about things as "exactly the same PERIOD", but "has the same ______ (color, size, cost, etc)"." Of course not, there are relative degrees of sameness. A pastrami sandwich might be considered the same as a turkey sandwich since they are both sandwiches (here meat is not an essential characteristic. On a different level of sameness, a pastrami sandwich might be considered the same as a watermelon, since they are both food (here edibility is the only essential characteristic). As the ones creating the thoughts, we are able to choose what degree of sameness we are interested in. What I did, was take this to the most extreme level, where every detail matters. Of course no one does this in their day-to-day life. If anyone did, why would I even post this as something I thought others might find interesting? "I don't think we construct in our minds this idea of things being "the same" the way you are defining it. The "the same" in our minds is a recognition of like characteristics that are correlated with information in our experience that is useful for decision making." No, I don't think our brains start with all information and hack it away to get to where we are. Just some information. I think all that information is out there, but we filter out massive quantities of information in order to effectively act on what we find important. This filtering could happen outside of our brain, via our senses, but I think it also happens within our brains. Consider peripheral vision for example. If you are intensely focusing on something you are watching, you might not even notice the color of something that was lying in your peripheral vision. You don't choose to not notice, it happens, because you only have finite resources for thinking. Behind the scenes, some part of your brain must toss out that sensory information it's receiving from the corners of your eye. 
You are talking about recognition. What is recognition, anyway? Let's think. I think it can be well represented with the vocabulary of equivalence classes. You can define an equivalence relation, and say that if two things satisfy this equivalence relation, they belong to the same equivalence class. What we have been discussing relates to the construction of equivalence relations. Assume all objects may be represented as a set of characteristics and values for those characteristics. Let's say an equivalence relation may be defined by selecting certain characteristics to include in an equality comparison. We are free to define whatever equivalence relations we want.

My extreme case is: include everything. In the spectrum of specificity, this is the most specific possible equivalence relation, and the most objective. The other extreme is: include nothing. In the spectrum of specificity, this is the most abstract possible equivalence relation. Basically, the word "thing" represents the one equivalence class derived from this relation. Anything is a thing. Ideas are things. Cars are things. People are things. Everything is a thing (we even include it in the words everything/anything/nothing). In the most specific case, there is no such thing as sameness. In the most abstract case, everything is the same. Which case do you think is more in tune with reality? I would say the specific case. The more specific things are, the more information is required to represent them. Pure abstraction essentially requires no information at all --- yeah, we'll include it in the class; we don't even need to look at its properties. The fact that the properties exist, the fact that we can even zoom in on them, suggests that all that specific information is out there and real. My conclusion is that abstraction is a product of our finite processing capabilities. Not to say it's a bad thing, but that is what it is.

"Secondly, generalizing often leads to the gaining of information rather than the loss of it. Generalizing allows for the specification of what information is necessary and sufficient to draw a correlation"

I argue that it never does, although it allows us to focus on the components of information that are more important to us. You could draw 100,000 useless correlations if you generalized as little as possible... but you wouldn't care. It is filtering out useless information (removing information) that allows us to do great things and draw the correlations we care about. How useful information is to me, or to you, though, only matters within our own heads.

"Hence by recognizing that, we gain information about the correlation itself. I.e. when Galileo dropped a wooden ball and a metal ball (or something similar, I don't remember exactly) off a tower and they landed the same time, he gained information about the general principle. By recognizing what was necessary and sufficient for this phenomena of 'sameness' (traveling the same path), he could gain information to come up with a theory to explain the phenomena."

He made a conclusion that is important to mankind. If he had spent 200 days examining the intricacies of the shape of the wooden ball as compared to the metal ball, he would have gotten a lot more information than the rule above ---- it's just worthless information to him and everyone else.

"Hence I would say sameness is a matter of degree rather than black/white or 1/0 (i.e. my DNA has sameness with my mom's to a degree of ~50%) and is inherently context dependent, since it requires a comparison"

Your fuzzy depiction of sameness can be represented by dividing whatever objects you are comparing into subcomponents and discretely evaluating sameness on each. How could you even get a number such as 50%? (I know you did this by assuming you are half your mom and half your dad.) A more rigorous approach would be to compare your nucleotides, position by position, to your mother's. For each nucleotide, you make a hard comparison (use an equivalence relation that includes only chemical structure, removing other, non-important information). Say, for example, you count X that match out of Y total. You could then say that your DNA is roughly X/Y the same as your mom's... much more than 50%... somewhere I heard human vs. chimp is something like 99%. Anyway, I think it is possible that every fuzzy comparison can be represented as the aggregation of other, hard comparisons...

"They are not deluding themselves into thinking that, say, one horse is [your definition of sameness] as another, or ignoring the differences or the effects of those differences (well, okay, some people do...but that's called denial), rather they take in that information (i.e. one horse is brown, the other black, etc) and store that information (not taking into account memory loss...that's a whole different topic), but they're recognizing that one horse shares the same characteristics that define it as being a horse."

Agreed; I'm not saying their sensory information disappears. I'm saying they generate the concept of "horse" (or, more likely, have had it generated at some point earlier in their life), and use that concept, instead of their actual sensory information, for any practical purpose involving the horse thereafter. This abstract, generated concept of a horse requires less information than the real, specific horse. A real, specific horse has all the properties contained in the concept definition, and far more: it has color, hair patterns, and tons of other stuff too, which would be impractical to store. By retaining and using this abstract concept, which has less information, the person is building a model of their observed experiences that has less information than the true source of the experience itself. This is no surprise, but it is something most people probably don't think about. I think that understanding this spectrum of specificity is the key to understanding how things like recognition work.

"By generalizing the object as "a horse", it allows an efficient passing on of certain pieces of information while not specifying others. Would that person have passed on that information anyways without the concept of sameness? I.e. if you had to specify exactly what characteristics every object you want to talk about in a conversation, would you talk about that object? If the idea of "sameness" allows information to be passed on that otherwise would not have been, then I would say that is not a loss. Also, the passing of information is not instantaneous, hence by using the idea of sameness you are improving the efficiency of information passage, i.e. trying to maximize the function of (information passed) per unit time. This allows you to pass more information overall (i.e. if you integrate the function of (information passed) over time, you get a higher value), hence I would say it is increasing information, not decreasing it."

Information = Useful Information + Useless Information. You are saying that by getting rid of useless information, we increase information. What you really mean is that we increase the potential to transfer useful information. This is absolutely true. But if you remove the boundary between useful and useless, and just consider information in general... outside of any subjective perspective... it is a loss of information.
-
They don't, they only wish for world peace. That's why they would be really confused when the announcer responds as you did.
-
We cannot be certain.... that's what sucks so much.
-
Agreed that this definition makes sameness pretty much useless, but sameness is an idealization. There are many useful concepts that are useless in the ideal case. When you stop caring about characteristics and start ignoring them, that is when sameness becomes a useful concept. This process, the removal of characteristics, we can recognize as abstraction or, in another word, generalization.

Pick a word out of the dictionary - horse, for example. What does the definition say? Any definition is essentially a list of characteristics. No definition lists every characteristic of any real object, since we wouldn't be able to process that much information. In order to get a more manageable amount of information, we selectively discard information in the form of unimportant characteristics. Almost always, the first thing to go is spatio-temporal position. I don't care where the horse is, and it doesn't matter where it is; I only care that it "is an odd-toed ungulate mammal belonging to the taxonomic family Equidae" (wiki was good enough for me: http://en.wikipedia.org/wiki/Horse). Likewise, I don't care what color it is, how tall it is, how heavy it is, etc.

Why am I talking about this? Because I feel it provides insight into how we process information. We are allowed to use "sameness" by willingly playing ignorant and removing information from what we observe. This is also what allows us to group things together, e.g. we can say two horses instead of one fundamentally unique entity and another "dissimilar" fundamentally unique entity. Without this, numbers would not make sense (if we treated everything uniquely, how could we ever count higher than one? Numbers only make sense when applied to things considered the same). It allows us to identify the ship from the OP as the same ship regardless of what it's made of (if we happened to remove such characteristics from our definition of the ship), but it doesn't contradict saying it's a different ship either.

With all the great stuff that generalization/abstraction does for us, it still seems like removing information is a form of lying to ourselves, doesn't it? As if all of the concepts and definitions we have constructed to give meaning to our world are illusions that we artificially make due to some evolutionary programming --- recognize a predator as an object, run away and hide; recognize food, eat it... But this ignorance (the removal of information, of unimportant characteristics) also serves great uses, as I mentioned above. Generally it feels like a bad thing to be ignorant, and even worse to want to be ignorant. This paradoxical idea is what has me so interested. I was hoping someone else might come to a similar conclusion from my OP, but maybe not... oh well.
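A tiny sketch of the counting point (mine, with invented example data): "two horses" only exists after the equivalence relation throws away every characteristic except the ones we chose to keep.

```python
from collections import Counter

# Raw observations carry far more detail than we want to keep.
observations = [
    {"kind": "horse", "color": "brown", "position": (0, 0)},
    {"kind": "horse", "color": "black", "position": (5, 2)},
    {"kind": "ship",  "color": "white", "position": (9, 9)},
]

# The equivalence relation: keep only 'kind', discard color, position, etc.
counts = Counter(obs["kind"] for obs in observations)
print(counts["horse"])  # 2 -- counting is only possible once the differences are filtered out
```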
-
I admitted from the beginning that I find the theory agreeable... I just separated the theory from the interpretation... and said that I don't like the most popular interpretation.

(3) - Looks like we've got nothing to disagree about here.

(4) - Information within a subjective perspective can change, that is certain... that is why I like the Bayesian interpretation of probability (another contentious debate). But if we attempt to step outside of subjectivity, and ask about what is really out there - real information in the fabric of existence, and not just the little bits and pieces that we have been able to process in our brains - then whether or not such information is increasing, when taking the entire universe as a system, is not so certain.

I suggest a truce, parting with mutual respect.
-
mmiguel's uncertainty principle: there is a fundamental uncertainty in evaluating a characteristic of an object for which that characteristic is not well defined. Evaluate the characteristic "mood" for each of the following life forms: [person, cat, insect, tree]. Is the inability to evaluate the mood of a tree an argument for the existence of randomness? Now replace mood with "position", replace the list of life forms with increasingly non-localized waveforms... and replace mmiguel with Heisenberg. Edit: just typed this due to thinking thoughts... didn't see you had responded above.
-
This should be the response at all beauty pageants.
-
They do need to occupy the same quantum state to be identical. This does not mean that bosons are identical. True sameness means that every characteristic is the same; quantum state is but one characteristic. There are characteristics that differ between two bosons that make them two bosons and not one boson. A photon leaving my computer screen has at least one characteristic that differs from a photon leaving your computer screen. Those are two bosons that are different. Wikipedia may say that there are no differences: http://en.wikipedia....tical_particles but what they are really saying (implicitly, and perhaps without realizing it) is that none of the characteristics they care about are different. They are not actually implying that no characteristics are different. This implicit catch is behind every thought of sameness that anyone ever thinks. A neat thing to notice is that this requires a specification of which characteristics are important enough to consider and which to ignore. Importance itself requires a subjective perspective. Hence sameness doesn't exist, in the truest sense (in objective reality, outside of our subjective noggins). But wait... explain why the concept is so prevalent, then.
-
Not the points which are important to me.

(1) - I don't really care about getting into relativity after this. You were the first one to mention it, and I never said anything prior to this which is at odds with relativity (nor quantum mechanics for that matter, if you consider the theoretical model to be separate from the interpretation). This point is not one I care about.

(2) - I know it does - I said it does in all of my posts about that. What I said didn't fit was the concept of position in certain cases, and the concept of velocity in other cases. If everything around us is accurately depicted by a bunch of imaginary waves superimposing, like QM assumes, then it's no surprise that some cases at some points in time resemble the extreme cases I mentioned, such that position and momentum are not entirely well-defined. Despite your best efforts, you asserted the same thing I did.

(3) - I don't think anything can refute the uncertainty principle; it's something inherent in trying to interpret a wave with non-wave properties (although the non-wave properties may seem to be present in certain waveforms - when appropriate). About the refining stuff... you can't assume that the current theory will never be replaced by something else. The old theory was that the world was flat... that was tossed, not refined... although if you think about it, it's kind of still around as an approximation... it's practical to assume flatness when the curvature of the earth just doesn't matter for your problem. Same with Newtonian physics... it's good enough when you don't really care about quantum/relativistic effects. Maybe one day the same will be said of quantum mechanics. There is no real argument you can make against that. Well, I suppose you could, but I can't imagine that it would be a good one, and I probably wouldn't be convinced.

(4) - I feel like my other answers have already covered this on each specific front we have talked about. Since you bring up the topic of information, let me mention something else I find interesting and have thought about before: is there anything more fundamental to existence than information? Information is essentially the potential for difference to exist. Think of a bit: a bit is the most basic unit of information. If the universe had one bit of information in it, it would have two possible states, and it would exist. If you remove that bit and think about what could remain... the only conclusion is nothing. Difference is what allows for existence, and information is a measure of how many possible ways things can be different.

This ties into our discussion in the following way. In determinism, the amount of information in the universe is constant with time, since future states are implied by past states, and vice versa. In non-determinism, the amount of information in the universe increases with time and with every (truly) unpredictable event. Don't use the 2nd law of thermodynamics as an argument here to say that the information in the universe must be increasing (reasoning from entropy always increasing, and entropy being a measure of the possible microstates of a system) --- there is a difference between the information I am talking about and the evolution of statistical-mechanical microstates, since the 2nd law refers to "usable information" or "usable energy", as opposed to the total information capacity of a system. Unusable energy is like heat, as opposed to voltage. Unusable information is like white noise, as opposed to a transmitted signal. Unusable stuff tends to have more microstates than usable stuff, just as a room can be messy in more ways than it can be clean. The information I'm talking about is not about what is usable vs. not usable, but about what is possible.

Anyway, I think that is the crux of determinism vs. non-determinism. It is consistent with your non-determinist view. You say knowing all the information at one point in time will not allow prediction of the future, because by the time the future gets here, new information that did not previously exist has been added at random into the universe. I say, no, the universe isn't randomly adding information as time goes forward, and anything that appears to be random is actually just complexity in disguise. Neither of these perspectives is inconsistent with Heisenberg's uncertainty principle, which is a statement about the application of one set of concepts (position, momentum, energy, etc.) to another (waves). It is not inconsistent with either of our views because it has nothing to do with the information content of the universe increasing. It only has to do with trying to discern the meaning of position, momentum, and energy out of a wave. The only reason it comes up in debates like these is that someone (not Heisenberg, I'm guessing) chose to use the word uncertainty to describe Heisenberg's observation, and uncertainty can be interpreted as something being fundamentally unknowable, and unknowability is a central dividing point in the determinist argument. This is really the only connection <---> re-use of a word. Heisenberg's observation about the nature of waves, and about applying the concepts of position, momentum, and energy to these waves, has no real direct connection to our main dividing point.

Your statement, 'The argument "If we had all the information, we could predict everything that happens, hence the world is deterministic" is wrong because the premise is wrong,' is essentially just an assertion that the non-determinist principle is true. My statement, "If we had all the information, we could predict everything that happens, hence the world is deterministic," is the opposite assertion. Neither of us has any proof, nor can we attain it. What's the point of "winning" an argument if there is no real evidence to support either side?
-
I knew what you were talking about. From the article, it is easy to see how two fermions might be different -> they must have different quantum states, according to the exclusion principle. So... they aren't the same... and no two fermions can ever truly be the same - proof thanks to Wolfgang P. I would have had to work a little harder if you had tried bosons.
-
I wish for world peace. Who among you is willing to be a jerk?
-
If they are completely the same, how are we even able to recognize them as being 2 objects instead of 1?