Question

got me wondering about the justifications for moral frameworks. I believe there are moral absolutes, but after thinking about it, it doesn't seem logical/reasonable to believe that. Reason seems to only support subjective morality.

So, my question is this: How does one justify morals using logic and reason?

Let's play with the idea of the source of morality, not as an absolute, but as something intrinsic to the particular reality/universe that allows life to exist and allows living things to develop enough mental acuity to recognize the concept of morality.

Step 1: Posit nothing coherent about the outer shell of reality: indifferent, random chaos, paradox and contradiction rule.

Step 2: Having no agenda, chaos cares not to prevent *emergent* order. In biology, DNA is an example of meaningful information emerging from meaningless data and becoming self-perpetuating (holding back the chaos).

Step 3: The meaning of life in this context is its own perpetuation, including shaping the environment to make it favorable for survival. By extension, given that science has discovered no intrinsic reason for the 'fine tuning' of our universe, it is plausible to posit that its observed properties were emergent. Given that the vast majority of possible universes cannot support life, or even matter, it is plausible to posit that the purpose of the universe and the purpose of life are *co-emergent*.

Step 4: Derive morality from those tenets that favor life's perpetuation and the perpetuation of a universe that is life-friendly.

I'm OK with you on steps 1 and 2 but not sure about your implied "meaning of life" at step 3. Who's to say that the inevitable consequence of life is not to develop lifeforms just intelligent enough to utterly destroy their environment?

Or, perhaps more likely, is life itself nothing more than a stepping stone to another emergent order? Our genes have so far found it to be in their best interests to group themselves into large clusters (organisms) which have become so complex as to gain their own awareness and develop an intelligence and sense of purpose which (importantly) allows them to pursue objectives other than the direct benefit of the genes which built them. Their interests were best served by creating organisms which sought to innovate and be all they can be. But this may not continue to be the case. If we can develop artificial intelligence, we as a species will be facing a choice between remaining the dominant intelligence, or achieving our full potential by creating something greater than ourselves.

The latter is almost inevitable. While this probably won't mean the end of life or humankind, it will mean the end of any remaining "purpose" we can assign ourselves, as we will be effectively obsolete. Our organic structure and evolutionary past limit what we can become, and our evolution by natural selection is pretty much finished. But we may be able to sow the seeds of a greater order without these limitations. Since this is the thing that is most in our nature, the greatest thing we can achieve, is it not our final "purpose"?


I'm OK with you on steps 1 and 2 but not sure about your implied "meaning of life" at step 3. Who's to say that the inevitable consequence of life is not to develop lifeforms just intelligent enough to utterly destroy their environment?

The problem I have with this is rooted in Occam's Razor: if the purpose of life is to self-destruct, why go through all the elaborate process of building life to sentient levels in the first place?

Or, perhaps more likely, is life itself nothing more than a stepping stone to another emergent order? Our genes have so far found it to be in their best interests to group themselves into large clusters (organisms) which have become so complex as to gain their own awareness and develop an intelligence and sense of purpose which (importantly) allows them to pursue objectives other than the direct benefit of the genes which built them. Their interests were best served by creating organisms which sought to innovate and be all they can be. But this may not continue to be the case. If we can develop artificial intelligence, we as a species will be facing a choice between remaining the dominant intelligence, or achieving our full potential by creating something greater than ourselves.

Here, again by Occam's Razor, you are invoking the 'simulation' hypothesis. Forget robots or other mechanical/'artificial' forms of physical life. The easiest way to create a high-tech world is to simulate it in an advanced computer. If any one civilization anywhere in our universe, at any time, has advanced sufficiently in computing power, and if any of these civilizations chose to simulate their evolution, then it is overwhelmingly likely that we are the result of such a simulation, because the civilization would be prohibitively unlikely to stop at performing only one simulation.
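
To put toy numbers on that step (the simulation counts here are invented purely for illustration): with one base reality and k simulated universes whose inhabitants can't tell the difference, a randomly chosen observer finds himself in the base reality with probability 1/(k+1), which collapses quickly as k grows. A minimal sketch in Python:

# Toy arithmetic behind the simulation argument; the counts of simulations
# are invented for illustration only.
for k in [1, 10, 1000, 10**6]:
    p_base = 1 / (k + 1)      # one base reality among k+1 candidate universes
    print(f"{k:>7} simulations -> P(base reality) = {p_base:.7f}")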

This has grave consequences for our discussion of morality. The 'gods' who are simulating us have absolute control over what we define as good and evil. I propose that in this case we are likely to be living in a simulation that is distinctly better in some way than the reality that our simulators are burdened with. They would/could be trying to 'find a better way' to formulate a universe. I.e., we are being directed toward a 'greater good'.

The latter is almost inevitable. While this probably won't mean the end of life or humankind, it will mean the end of any remaining "purpose" we can assign ourselves, as we will be effectively obsolete. Our organic structure and evolutionary past limits what we can become, and our evolution by natural selection is pretty much finished. But we may be able to sow the seeds of a greater order without these limitations. Since this is the thing that is most in our nature, the greatest thing we can achieve, is it not our final "purpose"?

If achievable, our purpose is to find a better life through the exploration of the 'thought space' available through simulation. It presents far more operational paths than actually going through the work of creating self-sustaining artificial physical beings. Again, in this scenario, the purpose remains to find an existence that is more friendly to our own survival, not to make ourselves obsolete, but to engineer our collective consciousness toward the greater purpose you imagine.

Fermi's paradox offers a fairly strong confirmation that the simulation hypothesis does indeed describe our reality. It asks: if we have evolved sentience, then it is possible; and whatever is possible is bound to be ubiquitous in the infinity of the universe, so where are the other examples out there in the universe? In a simulated universe, the answer to the paradox can be that the simulators have centered us on an adaptive grid (high resolution only locally, with lower resolution that is incapable of initiating life processes on other distant simulated planets).
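
As an aside, that 'adaptive grid' is essentially what present-day graphics and physics engines call level of detail. A minimal sketch of the idea, with all distances and thresholds invented purely for illustration:

# Level-of-detail sketch: simulate finely only near the observer, coarsely
# everywhere else. All thresholds are invented for illustration.
def resolution(distance_ly):
    if distance_ly < 1e2:      # local stellar neighbourhood: full physics
        return "full quantum/chemical detail (life-capable)"
    elif distance_ly < 1e5:    # rest of the galaxy: coarse statistical physics
        return "statistical approximation (no life processes)"
    else:                      # distant galaxies: just light on the sky
        return "precomputed light sources only"

for d in [10, 1e4, 1e9]:
    print(f"{d:>12} light-years: {resolution(d)}")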

If we live in a simulated universe, then there really is a 'god' (our simulators); and all questions of morality fall upon that god.


Social Contracts: I'm not sure about the relevance of this. I consider a Social Contract to be a wrapper that exists around morality, incorporating law and other effective rules of conduct. Human beings are living in circumstances other than those in which we did most of our evolution, necessitating the creation of large scale social structure that accommodates our morality while allowing moral variation to an extent. In most cases societies do not seek to define the morality of their inhabitants, only to keep it within acceptable bounds, which may be the bare minimum for the protection of that society. When we exceed those bounds, the collective morality of the Social Contract takes over, but otherwise the construction of morality lies with the individual. Individual morality may not yield a logically consistent model in the sense of having one single formula for all, but that doesn't make morality trivial or non-existent, any more than my earlier statements could demonstrate the non-existence of beauty.

Yeah, when I brought up SC, that was one of my prime motivators for adding this disclaimer:

(As an aside, I'm pretty sure that some of my statements are not in themselves logically sound, but I think that they convey the point that I'm trying to make. :) )

I knew that the SC didn't line up perfectly with divine morality, but like you said in the emphasized text above, when an ambiguity arises between people's morality, it's the SC that provides the insight on how to proceed. In truth, I would say that morality derived from divine inspiration is also a wrapper for individual morality. Why else would we have so many variations of religious denominations around the world, each with a slightly different code of conduct and solution for living a moral life? Every person has a slightly different definition of what that religion (and its moral code) means to them and each person's morality reflects that difference.

Before the development of the kind of logical reasoning that leads to things like your explanation of murder as a 'sin' below, we needed some mechanism for why something should or should not be "right" or "wrong," so why not use God? :mellow:

Which brings me to the explanation for why murder is wrong. On a social level, all you really need to know is that we have evolved to have social groups and a degree of self interest. When a bunch of self-interested people discuss what rules should apply within their group, murder is one of the first things you would all want to outlaw since the fear of being a victim outweighs the potential benefits of murdering other people. On a moral level, natural selection would favour genes which cause individuals to believe in a "right to life", because the carriers of those genes are statistically more likely to have families carrying the same genes, and are therefore much less likely to be murdered by siblings and members of their wider family. This is particularly true in creatures intelligent enough to gauge each other's character, who would single out the most murderous members of their own group as being a greater threat. If killing your own kind is a bad strategy, then nature selects against it, and for social animals it generally is a bad strategy.
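
To make that selection pressure concrete, here is a toy replicator-dynamics sketch (the fitness numbers are invented for illustration, not drawn from real biology): a gene that promotes killing within one's own group pays a small fitness penalty each generation, from kin-victims carrying the same gene and from carriers being singled out as a threat, and its frequency steadily collapses:

# Toy replicator dynamics: carriers of a "murderous" gene pay a fitness cost
# (their victims are disproportionately kin carrying the same gene, and the
# group treats carriers as a threat). Numbers are invented for illustration.
def next_freq(p, w_murderous=0.9, w_pacific=1.0):
    mean_fitness = p * w_murderous + (1 - p) * w_pacific
    return p * w_murderous / mean_fitness

p = 0.5                       # start with the gene in half the population
for gen in range(101):
    if gen % 25 == 0:
        print(f"generation {gen:3d}: murderous-gene frequency = {p:.3f}")
    p = next_freq(p)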


Wouldn't the longevity of a species/form of life more accurately and reliably indicate the intended emergent order, giving arthropods a stronger claim to being the definitive emergent order?

I agree with octopuppy that to define current conditions as a sort of "final cause" would be at least a little narrow-minded.

If quarks had the capacity to observe the universe and posit theories about the meaning of life, might they have decided at around 10^-10 sec that their existence was the answer to the ultimate question of life, the universe, and everything (notwithstanding those prescient quarks who randomly come up with 42)? Exponentially longer epochs followed the quark epoch and introduced more order. That quarks are only a building block of a building block (subatomic particles) of a building block (atoms) of a building block (molecules) of self-replicating information systems (DNA) shows that those little upstarts were a little short-sighted.

Now, I don't think trying to figure out what level of complexity would/might/could be attained by the universe before heat death has any bearing on us determining principles to guide our behavior. So I have to admit that being anthropocentric is not just more attractive in terms of vanity but also in terms of practicality.

The problem I have with this is rooted in Occam's Razor: if the purpose of life is to self-destruct, why go through all the elaborate process of building life to sentient levels in the first place?
I'm not sure in what sense you intend the use of the word "purpose". The fact that we are here implies that we live in the kind of universe that would get us this far, and that the nature of life is to persist and generate sufficient complexity to get us to this point. But human beings (and human morality, for that matter) do not have a known precedent in Earth's history. What happens next is anybody's guess. You seem to be assuming a sentient purpose behind it all, which would not have created us unless it was for a reason. I was not.

Here, again by Occam's Razor, you are invoking the 'simulation' hypothesis. Forget robots or other mechanical/'artificial' forms of physical life. The easiest way to create a high-tech world is to simulate it in an advanced computer. If any one civilization anywhere in our universe, at any time, has advanced sufficiently in computing power, and if any of these civilizations chose to simulate their evolution, then it is overwhelmingly likely that we are the result of such a simulation, because the civilization would be prohibitively unlikely to stop at performing only one simulation.
Provided the amount of processing required is physically possible, yes. Still, if it were a sufficiently accurate simulation and we were embedded in it, it needn't make any difference. Otherwise, there are variations on that scenario that can create some interesting moral questions, but it wasn't really what I was talking about.

This has grave consequences for our discussion of morality. The 'gods' who are simulating us have absolute control over what we define as good and evil.
Only because they created us. Like any other gods, they are either passive observers or active participants in our world. I see no evidence for the latter, and the former leaves me free to make my own morality. Sure, the nature of my mind is defined by the universe they created, but why should I let that bother me? If it were so then being myself would seem to be the appropriate course of action, and I was going to do that anyway.

I propose that in this case we are likely to be living in a simulation that is distinctly better in some way than the reality that our simulators are burdened with. They would/could be trying to 'find a better way' to formulate a universe. I.e., we are being directed toward a 'greater good'.
That could have interesting implications if it were probable. It would mean our universe is likely to be set up not just for the formation of life but also skewed towards 'goodness'. It's a nice thought, though it does hinge on rather a lot of suppositions. Still, interesting.

Fermi's paradox offers a fairly strong confirmation that the simulation hypothesis does indeed describe our reality. It asks: if we have evolved sentience, then it is possible; and whatever is possible is bound to be ubiquitous in the infinity of the universe, so where are the other examples out there in the universe? In a simulated universe, the answer to the paradox can be that the simulators have centered us on an adaptive grid (high resolution only locally, with lower resolution that is incapable of initiating life processes on other distant simulated planets).
...or the formation of intelligent life is sufficiently unlikely that it has occurred only very sparsely; and if such life is limited in travelling and sending signals by the speed of light, then the further away we look, the earlier it would have had to have formed in order for us to see it. Or they aren't sending signals or other stuff at us. Or we just haven't looked in the right direction yet. Hardly conclusive...

If achievable, our purpose is to find a better life through the exploration of the 'thought space' available through simulation. It presents far more operational paths than actually going through the work of creating self-sustaining artificial physical beings. Again, in this scenario, the purpose remains to find an existence that is more friendly to our own survival, not to make ourselves obsolete, but to engineer our collective consciousness toward the greater purpose you imagine.
Kind of like trying to run the internet on a bunch of networked ZX81s because we're emotionally attached to the hardware? It's selfish, and worse, it would be a failure to achieve our potential, playing computer games when we could be creating something incredible.

Organic life may be just a temporary and rather messy soup from which intelligence emerges. But by intelligence, I don't mean the rudimentary kind that we scrape by on. Consider the very first molecular replicators, only just capable of storing and reproducing enough information to make slightly imperfect copies of themselves. You would not immediately suppose that such things would inevitably give rise to life in all its ever-increasing complexity, but once that ball was rolling there was no stopping it. Likewise, when a being exists which is capable of creating a more intelligent being than itself, a process will begin whose end result will probably be inconceivable to human minds. We as a species are like little proto-replicators, almost able to reproduce and develop, but not quite. And then one day, one of them finds a way to cross that line, and the rest is history. I think this will occur inevitably. If you think life has a purpose, that is no reason to suppose that the purpose would be carried out to completion by life. Our own obsolescence could be our crowning achievement. Why not?


Can you tell us any of those underlying truths?

The purpose of mortal life is to teach us and test us in preparation for a higher order of existence. Birth (to include conception) is not the beginning of our existence, and death is not the end. Much like children optimally should be taught at a developmentally appropriate pace/level, we are given the opportunity to learn at our own pace. Moral agency is sacrosanct. So a large portion of what comprises morality is proscribing infringement of another person's agency. Clarifications are given on a continuous basis, and a certain degree of latitude is allowed due to the imperfect nature of the agents.

But without a clear definition, are you not left in a similarly ambiguous position? You must also do it in an ad hoc fashion, perhaps also through introspection, or prayer.

There is an unavoidable amount of ambiguity. The beauty lies in a system by which errors in judgement can be corrected.

IMO the difference amounts to an assertion that there is a definitive moral code (since God's opinion would be the only one that matters), but what is the definitive moral code?

(can't resist noting the similarity with the meaning of life; religion promises us meaning, but fails to tell us what the meaning actually is :dry:)

The complexity of life and the existence of a moral continuum preclude any comprehensive definitive code.

To some extent I agree that "moral truth" doesn't mean anything without God. But that doesn't mean you can't say anything useful about it, and "nothing is true; everything is permitted" certainly doesn't represent my point of view.

But do you ascribe a moral status of good or bad to any behaviors? If so, why?

[Very true points about "beauty"] The same applies to our perception of morality. The term "moral truth" is no more meaningful than "perfect beauty". It implies a fixed standard that does not exist, and thinking that it does may make us intolerant.

"Perception of morality is only a personal preference(is not objective/fixed) if there is no fixed standard." is more or less a tautology. However, I understand the inevitability of circular reasoning in this type of discussion. I am of the opinion that individual standards of beauty, actually individual tastes/preferences/styles in general, are the only things that differentiate us as individuals. Thus, believing in objective moral truths puts beauty in a different realm of mental activity in my way of thinking.

However, even though everything in the previous paragraph may be true, the fact remains that you could create a general formula for female beauty, provided it did not have to be considered perfect in everyone's eyes, and may be subject to change. At the very least you can apply clear principles like "thou shalt have two eyes, no more, no less, of roughly equal size", and obviously you could go a lot further. Likewise we have strong correlations in our moral opinions which can be formalised to some extent, even though the finer points may vary.

As an aside: This idea is pivotal in the plots of Scott Westerfeld's young adult novel series "Uglies." I recommend it.

Social Contracts: [stuff]

My thoughts track with yours here. However, the question remains to be answered, what of the children who need to learn a behavioral model? Is it correct then to say, the social contract is the guide, anything within its bounds is good/right, and anything outside those bounds is bad/wrong? I have a feeling you might say the better words to use would be acceptable and unacceptable respectively, and that it is up to the parents/caregivers/teachers to give a framework and each child must flesh things out according to their experience. How do we determine that framework?

Whether it is right for some enforcer to kill is another matter, and more ambiguous. It largely boils down to what sort of society we want to live in.

And here we have it: individual arbitration of moral tenets. As much as I want to talk about theory, it keeps coming back to practicality. And why not? Pragmatism has a very large following for a reason.

[Evolutionary basis for "murder is wrong"]

I mentioned this. I think you're right, except your reasoning only justifies "murder is unacceptable," not "murder is wrong." Not only does evolution not care about the individual; evolution doesn't even care about species.

Moral agency is sacrosanct. So a large portion of what comprises morality is proscribing infringement of another person's agency.
By "infringement of another person's agency" do you mean "doing what other people think is wrong" or something else?

But do you ascribe a moral status of good or bad to any behaviors? If so, why?
Yes I do but I also acknowledge that those words mean different things in different contexts. I consider "good" behaviour to be behaviour which is in the best interests of the group, but I am aware that this is a poorly defined concept.

For one thing, "the group" in this case may denote your family, all of humankind, life on Earth, or some other set you belong to. Even for an individual the nature of "good" behaviour varies depending on the group being considered.

For another, the "best interests" of that group may be even harder to pin down, and raises a whole range of problems whether at the scale of family and friends, or of life on Earth. It depends on personal values about what is important, so my idea of "good" cannot be assumed to be the same as someone else's.

And I consider these to be moral matters because we are social creatures with an inbuilt need to serve the group and not just ourselves. The choice to serve the group is a moral choice.

As an aside, I have focused there on what "good" and "morality" is. "Bad" and "immorality" would naturally tend to be the converse, but that is more complicated. Choosing to serve oneself rather than the group is not in itself immoral. What is immoral is acting against the best interests of the group. This may amount to a choice to serve yourself at the expense of the group (particularly when the cost to the group is excessive), but often it is more a matter of acting in nobody's best interests, being not just suboptimal but dysfunctional. As such, I don't think the terms "bad" and "immoral" are a lot of use because they say nothing about the true nature of the behaviour, which may be down to ignorance, fear, stress, psychological issues, difference of opinion, or something else.

As an aside: This idea is pivotal in the plots of Scott Westerfeld's young adult novel series "Uglies." I recommend it.
Looks like a very interesting quadrilogy.

My thoughts track with yours here. However, the question remains to be answered, what of the children who need to learn a behavioral model? Is it correct then to say, the social contract is the guide, anything within its bounds is good/right, and anything outside those bounds is bad/wrong?
I don't consider the Social Contract to be part of my morality. I think it exists for the protection and continued function of society. We generally comply, but personal morality is something else and may even be at odds with the Social Contract.

I have a feeling you might say the better words to use would be acceptable and unacceptable respectively, and that it is up to the parents/caregivers/teachers to give a framework and each child must flesh things out according to their experience. How do we determine that framework?
In practice we determine it through an informal process of continuous dialogue, testing different ideas and scenarios. Stories are important, I think, because they give a child a set of scenarios. Good children's stories are often full of moral ambiguity, raising questions of "how do I feel about that?" or "what would I do in that situation?", and also making a child think about how such choices relate to consequences. In the end I think what you get is not a framework, but a collection of signposts.

I mentioned this. I think you're right, except your reasoning only justifies "murder is unacceptable," not "murder is wrong." Not only does evolution not care about the individual; evolution doesn't even care about species.
The processes of evolution are of course indifferent to anything like morality. If morality arises from such processes, it was not because of any intent. IMO, my reasoning does justify a view that murder is wrong, but maybe that's because I interpret the word "wrong" differently. It is "wrong" because it is in the character of most people to view it as such, and this is so because of the evolutionary processes I outlined. I described why, particularly in intelligent social animals, one would expect to see the emergence of a sense of "right to life", particularly within the social group. Since I view morality as a behavioural matter, that's all there is to it.
I think everyone here should read this:

http://qntm.org/responsibility

Yep that was a good one. I think they'd be fine if they turned the computer off, though. Being evaluated isn't going to make you exist, although their interference by manifesting something did give me some doubts on the matter. I suppose they could always fast forward the simulation to the end of the universe before switching off, just to be on the safe side ;)

Yep that was a good one. I think they'd be fine if they turned the computer off, though. Being evaluated isn't going to make you exist, although their interference by manifesting something did give me some doubts on the matter. I suppose they could always fast forward the simulation to the end of the universe before switching off, just to be on the safe side ;)

What do you mean they'd be fine if they turned the computer off? They would cease to exist because the level above them would do the same.

What's interesting is they can do a cascade effect to see what number they are. When everyone turns on the black orb, those in the real universe won't see a black orb and will know they are universe 0. Then everyone that did see a black orb can make a second orb of shade 1 (if black is shade 0); univ0 wouldn't make this new orb, so univ1 would see that it hadn't been made for them and would know they were univ1 (the first simulated universe), and so on down and down. Each universe would know its own nesting level.
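
As a sanity check, the bookkeeping does work; here's a toy sketch in Python (just a model of the cascade, with a hypothetical fixed nesting depth):

# Toy model of the orb cascade with a hypothetical stack of 6 nested universes.
# Universe 0 is the base reality; universe i runs the simulation of universe i+1.
DEPTH = 6
seen = [set() for _ in range(DEPTH)]      # orb shades visible in each universe

for r in range(DEPTH):                    # round r: propagate shade-r orbs
    for i in range(DEPTH - 1):
        # Round 0: every simulator makes a shade-0 orb appear in its child.
        # Round r>0: only those who saw a shade-(r-1) orb make a shade-r orb.
        if r == 0 or (r - 1) in seen[i]:
            seen[i + 1].add(r)

for i in range(DEPTH):
    level = next(s for s in range(DEPTH + 1) if s not in seen[i])   # first missing shade
    print(f"universe {i} sees shades {sorted(seen[i])} -> deduces nesting level {level}")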

In the comments the writer acknowledges why it's logically impossible (mainly because the computer is impossible haha)

What do you mean they'd be fine if they turned the computer off? They would cease to exist because the level above them would do the same.
Well, there's the puzzle. If this were so, then you could safeguard your future, as I said, by fast forwarding the simulation to the end of the universe and then switching off. But what difference would that make? You're only evaluating a set of states in an algorithm. Do they need to be evaluated in order to be valid states?

What's interesting is they can do a cascade effect to see what number they are. When everyone turns on the black orb, those in the real universe won't see a black orb and will know they are universe 0. Then everyone that did see a black orb can make a second orb of shade 1 (if black is shade 0); univ0 wouldn't make this new orb, so univ1 would see that it hadn't been made for them and would know they were univ1 (the first simulated universe), and so on down and down. Each universe would know its own nesting level.
But so time consuming! However far they counted, in all probability they would give up before knowing their position.

You could maybe speed it up (not that it makes any difference) by counting in binary: Pause the simulation, then look behind you; if you do not see a white orb in your world, make one appear one Planck time after you paused the simulation. Every other universe will get a white orb. Also check for a yellow orb. If you have a white orb, then create a yellow orb only if you have one. Otherwise create a yellow orb only if you don't have one. And so on...*
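
For what it's worth, here's a sketch of the binary idea using a standard binary increment (which may not match the exact orb rules above, but does the same job): orb colour c present means bit c of your nesting level is 1, and each simulator shows its child the orbs encoding its own level plus one, so roughly log2 of the depth in colours suffices:

# Binary-counting variant: orb colour c present = bit c of the nesting level.
# Each simulator encodes (own level + 1) in its child universe, so only about
# log2(depth) orb colours are needed. The depth of 6 is hypothetical.
def orbs_for(level):
    return {bit for bit in range(level.bit_length()) if (level >> bit) & 1}

DEPTH = 6
level = [0] * DEPTH                       # universe 0 knows it is level 0
for i in range(DEPTH - 1):
    child_orbs = orbs_for(level[i] + 1)   # simulator shows child the orbs for depth+1
    level[i + 1] = sum(1 << b for b in child_orbs)

for i in range(DEPTH):
    print(f"universe {i}: orb colours {sorted(orbs_for(level[i]))} -> level {level[i]}")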

In the comments the writer acknowledges why it's logically impossible (mainly because the computer is impossible haha)
Oh, little details... Which comment is the author's, anyway? There's no indication of who it is on the story.

* Incidentally, once the simulation has been paused for a few minutes, so it is a few minutes behind your time, that might be a good time to switch off the computer, confident that you won't cease to exist, otherwise the last few minutes couldn't have happened :D


The fundamental problem I have with the computing scenario in unreality's link is that it ignores the 'butterfly effect' and quantum uncertainty. No two simulations will ever be alike.

I've hesitated in responding here, because I'm befuddled (not uncommon for me). All this discussion of an 'ultimate destiny' or of the future prospects for evolutionary advance (or decay) on a biological, memetic, electronic, mechanical, chemical or other level is wonderful fun. I fully agree that Truth and Meaning endlessly evolve, and that the direction of such evolution cannot be foretold, not even theoretically (again the butterfly and quantum fluctuations). So what does this 'landscape' tell us about the basis for Morality?

I was proposing something really, really simple, based only (note my original post said 'in this context') on the observations we have in hand. We are embedded in chaos, yet life manages to stave off the second law of thermodynamics. Life is the best example we have of emergent purpose or meaning. The key concept is the emergence - the premise that the building blocks of this universe can form 'alliances' that can actuate a 'sacred space' or a 'safety zone' for themselves.

Originally I made no attempt to define the purpose/meaning, then somehow I got caught up on nit-picking about it by projecting into the unknown future. What I mean to propose is that the basis of morality is not fundamental from first principles, but rather emergent. And once it emerges it is, indeed, hard wired into our reality by the 'alliances' that were favored by our ancestral building blocks (space-time [just one of an infinite array of possible substrates], matter/gravity, star formation, elements other than H, He and Lithium that can only form through nuclear fusion in stars, chemical compounds with a spectrum of stability, biochemistry). It's fun to imagine where the currents will flow in this river-like succession of building blocks, but that seems off topic if we're to explore, in hindsight, the basis of what we might call good (putatively: perpetuation of the currents that brought us to this present state) and evil (putatively: the chaos out of which our 'sacred space' emerged and its processes of decay into which we are constantly being drawn.)

The fundamental problem I have with the computing scenario in unreality's link is that it ignores the 'butterfly effect' and quantum uncertainty. No two simulations will ever be alike.
I'm pretty sure it wouldn't work for several reasons but I still think it's a good thought exercise.

I've hesitated in responding here, because I'm befuddled (not uncommon for me). All this discussion of an 'ultimate destiny' or of the future prospects for evolutionary advance (or decay) on a biological, memetic, electronic, mechanical, chemical or other level is wonderful fun. I fully agree that Truth and Meaning endlessly evolve, and that the direction of such evolution cannot be foretold, not even theoretically (again the butterfly and quantum fluctuations). So what does this 'landscape' tell us about the basis for Morality?
IMO, it mostly tells us that there isn't one. The bigger the picture you look at, the more arbitrary your values become. I gave the example of the "purpose of life" to create greater intelligence, not because I particularly believe it to be so, but simply to illustrate how any obvious idea about life's purpose could be turned on its head. What I do believe in is the old adage that "charity begins at home", meaning that it is generally most effective to fix our moral sights on that which surrounds us and try not to be overly concerned with the bigger moral picture. When I do think globally, my values change. The preservation of human life is of less importance to me, but the promotion of humanity and humanitarianism remains very important, and is perhaps the area where we can be of most effect globally. Looking at an even larger scale, if we are taking a truly objective stance, the importance of humankind itself must be called into question.

Originally I made no attempt to define the purpose/meaning, then somehow I got caught up on nit-picking about it by projecting into the unknown future. What I mean to propose is that the basis of morality is not fundamental from first principles, but rather emergent. And once it emerges it is, indeed, hard wired into our reality by the 'alliances' that were favored by our ancestral building blocks (space-time [just one of an infinite array of possible substrates], matter/gravity, star formation, elements other than H, He and Lithium that can only form through nuclear fusion in stars, chemical compounds with a spectrum of stability, biochemistry). It's fun to imagine where the currents will flow in this river-like succession of building blocks, but that seems off topic if we're to explore, in hindsight, the basis of what we might call good (putatively: perpetuation of the currents that brought us to this present state) and evil (putatively: the chaos out of which our 'sacred space' emerged and its processes of decay into which we are constantly being drawn.)
Your version of "evil" may put you at odds with the very nature of the universe. But both of us have put forward ideas of life's purpose which are about creating greater order, so maybe this is as good a way as any of defining "good", futile though it may be in the long run. Life is like that, it seems: an emergent order that defies chaos for as long and as greatly as it can. Perhaps we can acknowledge it to be an exercise in sandcastle-building without undermining its importance. We seem to have a difference of perspective since I think you have to consider the future when talking about morality. The only morality that matters is the morality of now, not that of the past. I do not view the preservation of past processes as being overwhelmingly important, if there is a better way. Self preservation is in our nature but morality is the practice of unselfish behaviour, for the greater good, if we can decide what that is.