BrainDen.com - Brain Teasers
Question

This got me wondering about the justifications for moral frameworks. I believe there are moral absolutes, but after thinking about it, it doesn't seem logical/reasonable to believe that. Reason seems to support only subjective morality.

So, my question is this: How does one justify morals using logic and reason?



I think that there would be some on this forum who would say that logic and reason are the only valid means for developing morality. But I'll take a crack at my own interpretation.

Looking at things from a logical perspective, the "Golden Rule," which says, "Do unto others as you would have them do unto you," is a very logical moral guideline. It allows for a basic and (mostly) universal agreement on how people should treat each other. It's logical because if you think about the principles of "Monkey see, monkey do" or "An eye for an eye" which seem to be somewhat deeply ingrained in the human psyche, then treating a stranger (or neighbor) as you would wish to be treated is a good, relatively safe maneuver.

I certainly wouldn't want someone coming along and taking my possessions against my will and I'm not a big fan of people killing me, so it's reasonable for me to assume that my friends and neighbors likely have a similar attitude about their lives. In order to live with other humans, we have to be able to empathize with their position and consider what we would do in their place. The Golden Rule gives us a very simple and logical starting point for forming a relationship between two equals. There are other situations that get more complicated, but that's enough to get the ball rolling.

Of course, I guess you could try to say that the GR is subjective, since there are people who have different preferences from your own, but I think that it is probably the most reasonable baseline to start from as you try to build a relationship with another person (assuming that there are no other rules already agreed upon between the two individuals. I guess this situation could arise between two anarchistic atheists who reject all other means of determining morality... :lol: ).


Morality is not logical. For example,

You have two choices:

1. Save yourself (100% chance of survival)

2. Save a random stranger (50% chance of survival)

Logically, only choice 2 is correct. Logic tells you that you must survive and reproduce.



I'd actually say choice one is correct. But that just proves this is subjective. Probability-wise, choice two leaves you with more people (if, say, one million people were given this option), but most people feel inclined to fend for themselves when it comes to that, because our evolutionary instincts tell us WE want to survive, not necessarily everyone else.

I think it's a combination of logic and instinct, but I have absolutely nothing to back that up.
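As a side note, the probability claim above can be made concrete. Here is a minimal sketch in Python, assuming one common reading of the dilemma (my assumption, not something stated in the post): choosing yourself means you live and the stranger dies, while choosing the stranger means the stranger lives and you survive with 50% probability.

```python
import random

def average_survivors(choice, trials=100_000):
    """Average number of people left alive per encounter, assuming:
    'self'     -> you live for sure, the stranger dies;
    'stranger' -> the stranger lives, you live with probability 0.5."""
    total = 0
    for _ in range(trials):
        if choice == "self":
            total += 1                       # you survive
        else:
            total += 1                       # the stranger survives
            total += random.random() < 0.5   # you survive half the time
    return total / trials

print(average_survivors("self"))      # ~1.0
print(average_survivors("stranger"))  # ~1.5
```

Under that reading, a million people each taking choice two would leave roughly 1.5 million survivors versus 1.0 million for choice one, which is the population-level sense in which choice two "leaves you with more people", even though self-preservation pulls each individual toward choice one.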


I don't believe in morality at all... but that doesn't mean I have no morals. Let me explain: unlike most people I don't believe in any absolute handing down of divine rules or any set of things you should do in all circumstances.

I think that morality and ethics (assuming they mean what to do when there is more need than resource [a generalization] or some other dilemma) are:

1) subjective

2) mostly based on evolutionarily triggered responses and deep emotions, as opposed to hardcore logic

3) hard-coded into our brains, ensuring the evolution of society -> evolution of species -> survival of genes

4) based on the individual circumstances.

The fourth item is key. I have no general answer to most dilemmas. Maybe some, but even then not all-encompassing. It really does depend on the circumstances. :thumbsup: We do have an ingrained sense of what's wrong and right, and most of that is based on empathy, i.e., seeing other people as humans just like you.


I believe you should do unto others as you would have done unto you in all circumstances.

If I were to steal, I would expect to be punished for it,

so I see nothing wrong with punishing others for stealing or other such crimes.

If I'm good to people, I expect to be rewarded, or at least to get a better sense of who I can trust,

so when people are good to me I try to reward them.

It's not really complicated, in my opinion.


For morality and evil, here's an oversimplified synopsis of what I think (from the view of evil):

Evil is just a matter of opinion. Case in point: Hitler thought he was doing good. Thing is, I think there is (at least arguably) an objective definition of evil*, but no human will ever be able to see it any other way than subjectively, so the best we can try to do is unanimously agree on subjective definitions of evil (I think we can all agree on genocide, for example) and try to emulate the objective structure of morality.

*Also relevant to the entire morality vs. evil thing: I often see the struggle between "good and evil" as just a struggle between self-interest and societal interest. For example, Cathy in East of Eden is "evil" in that she is extremely self-centered and completely apathetic when it comes to others. Thing is, this doesn't apply to many cases, like, again, Hitler.

EDIT: haha, just skimmed some answers, and I like Unreality's response


I think unreality has it nailed down. There is no logical/rational basis for morality. Thus, there can be no absolute morality without some type of arbiter.

It's pretty crazy to think about. For example, when you understand it, you have to admit that logically there is no way to say that any of Mohandas Gandhi, Mother Teresa, Hitler, Che, Mandela, Bush, Cheney, Obama, you, or I have any relative moral superiority over any other person in word, thought, character, opinion, or action. You can say many or even most people agree to some ranking, but it's always subjective.

So, you can't logically say I am wrong for taking your property, or murdering you and your family, or committing genocide, if any of those things were within my power and inclination (which they aren't). This isn't reductio ad absurdum, nor is it an appeal to emotion, except to illustrate just how crazy it is. This is just logical, as unreality and I understand it. I'm grateful human nature has a tendency toward self-preservation and altruism as its main moral drives.

I really was hoping someone could come up with a morality based on logic and reason though.


For example, when you understand it, you have to admit that logically there is no way to say that any of Mohandas Gandhi, Mother Teresa, Hitler, Che, Mandela, Bush, Cheney, Obama, you, or I have any relative moral superiority over any other person in word, thought, character, opinion, or action.

??? I personally would completely disagree with this statement. If I go through my whole life without murdering someone, I don't see how that wouldn't make me morally superior to someone like Hitler, who ordered the deaths of millions. Could you expand on this?

I wouldn't say at birth that I had any moral superiority; that is, we aren't born bad or good, it's something we learn. But once you commit atrocities, you lose the ability to call yourself a moral person, imho.

As for using logic and reason, I guess that depends on your axioms.

Let's start with the definition of a moral and work from there, shall we?


phillip: what Semper Rideo was saying is that a person who willfully and soberly commits what you call an "atrocity" does not consider such an act an atrocity. You may think that "savage" island natives performing a human sacrifice is abhorrent, but to them it's part of their religion and lifestyle.

Morality, like beauty, is in the eye of the beholder :thumbsup:



Exactly, Hitler thought he was doing his people a favor. Some might disagree, but he thought it was right.

Another example: on the show Scrubs, the chief of medicine had to decide which of two people to put in a drug trial. There was a rich man, who could survive without the treatment but would most likely donate a lot of money to the hospital, and a poor man, who would definitely die without the treatment. In the end the doctor picked the rich man. The poor man died, but the money that was donated was used to build a prenatal care center. So did he make the right choice?


There is certainly grey area to be had; I'm not God, and I don't claim to have special knowledge of what He wants.

Nevertheless, I think most people would agree that human sacrifice in the manner you describe, and mass murder, are immoral, period. The fact that Hitler himself did not see that does not make his acts any more moral, except perhaps in his own eyes.

I really was hoping someone could come up with a morality based on logic and reason though.
Since most people seem to agree on most broad aspects of morality, I think you could probably come up with a morality based on consensus of opinion at least, and I think internally that's what each of us does to a large extent. Since the opinions to which we are exposed are becoming increasingly global, it's not unreasonable to suppose that humankind would approach a global consensus of opinion about morality. It seems to me that we are tending to do so, and the more this tends to be the case, the less likely you would get a modern Hitler. Perhaps one of the main reasons why religion is increasingly out on a limb these days is that religion tends to cause pockets of resistance against that consensus. But I am fairly sure that even these will be ironed out over time. The more out of step with the rest of humankind they are, the more dangerous they become, but the danger is on a small scale, and ultimately violence leads to alienation and destruction. Those which are less out of step with the rest of humankind will gradually tend to be assimilated by the steady attrition of common sense. That's my personal opinion: I don't see a significant long-term future for religion in terms of affecting moral views.

So if we assume that this much is correct and humankind is headed for a consensus of opinion on morality, what will that be? Well, as Unreality stated, it's dependent on circumstances. To take a simple example, consider the grey area between self-interest and serving the common good. It seems to me that serving the common good is the more "moral" of the two, but naturally we humans find a balance between the two. If we acted out of a wish to serve the common good all the time, that would seem to be the moral ideal, and such a world might perhaps be a better world. But is it workable? I think it largely depends on information. The weakness of the ideal lies in the fact that it is open to abuse by selfish individuals, and creates an environment in which selfish individuals might prosper. In order to make it work, we (people collectively) need to know whenever somebody has behaved selfishly, to ensure that this behaviour is punished sufficiently to discourage it. The Iterated Prisoner's Dilemma is a good simple model for this kind of system, and illustrates that the threat of swift and certain retribution is necessary for cooperation to flourish.
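To illustrate that last point, here is a minimal Iterated Prisoner's Dilemma sketch in Python with the standard textbook payoffs (the strategies and numbers are illustrative assumptions, not anything taken from this thread). Tit-for-tat embodies the "swift and certain retribution": it punishes defection immediately, so a defector cannot keep exploiting it the way it can exploit an unconditional cooperator.

```python
# Minimal Iterated Prisoner's Dilemma sketch. Standard payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy the opponent's previous move (swift retribution)."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_cooperate(opponent_moves):
    return "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each strategy sees only the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))      # (99, 104): retaliation limits the damage
print(play(always_cooperate, always_defect)) # (0, 500): unconditional niceness is exploited
print(play(tit_for_tat, tit_for_tat))        # (300, 300): mutual cooperation pays best overall
```

Tit-for-tat never out-scores its individual opponent, but it sustains cooperation with cooperators while refusing to be a doormat to defectors, which is roughly the sense in which cooperation flourishes only when selfish behaviour is reliably and quickly punished.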

But now we are at odds with other human ideals such as freedom and privacy. These conflict with the notion of a perfectly functioning justice system. This now becomes a question of what kind of world we want to live in. Similar conflicts arise when considering limited resources. Do we stretch out the resources of the world among ever-increasing numbers of people, or take measures to control population? These are issues related to morality, because they define "the good" that we are aiming towards. I think that, too, might perhaps be determined by consensus of opinion, since we could determine what people want, what kind of world we wish to live in, by comparing the views of all people. Personally, I find that idea a rather depressing prospect because the desires of the majority of people are so manipulated by selfish interests that we would simply find ourselves aiming for an unsustainable existence. So who tells us what "the good" is? Some arbitrary authority?

I'm just exploring a few avenues here to illustrate that however we try to determine a moral ideal, a balance seems preferable to an absolute view, a balance between democracy and authority, selfishness and altruism, justice and freedom. The optimal balance will depend on culture, technology, resources, population, politics, and many other factors, so as long as those things are subject to change, the optimal morality will change as well.


Very well-thought-out comments. Thanks to everyone.

Sorry, phillip1882, I wasn't trying to offend or attack anyone's personal moral beliefs. I was trying to see if anyone has thought of, or can think of, objective morals based on logic or reason. I am very interested in what axioms people have used or would use for such.

Now, going off of octopuppy's comment: it seems like you're talking about a form of social contract. Sans any higher authority, that seems ideal, at least from a utilitarian or pragmatic standpoint. I really hope I am never involved in the process of ironing out the balance points you mentioned; I would imagine it to be an armed conflict spanning the entire globe that lasts for centuries. In fact, it seems that all of the wars in history are a part of that endeavor. So I guess I have no choice but to participate.


I too wish to apologise; I did take some offence at your comments. It's sometimes hard to consider another person's viewpoint as valid.

The basis for morality ultimately lies in society; if everyone acts in a self-serving, amoral manner, who could say what moral behaviour truly is?

But to me, a sunset would be beautiful even if the whole world were blind; in the same regard, acting for purely selfish reasons at all times tends to lead to destruction.

It is a difficult question, and one that takes some time and study to understand.


I have decided to try to answer your question in a more structured form.

Objective: not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.

Moral: of, pertaining to, or concerned with the principles or rules of right conduct or the distinction between right and wrong; ethical: moral attitudes.

Now that I have that clear, we can begin.

First, let us assume that all human beings are capable of feeling pain, either physical or mental.

Second, let us assume that we do not desire pain.

Third, let us assume that we as human beings can feel and desire happiness.

Therefore, an action can be said to be objectively morally wrong if it brings pain; the more pain, the greater the wrongness of the action. Conversely, an action can be said to be right if it brings happiness; the greater the happiness, the more right the action. Clearly, murder would be wrong, as it causes the greatest pain of all, to the victim's family and to the individual, while perhaps giving only a mild form of satisfaction to the murderer. Stealing would likewise be immoral, as the person you steal from feels the pain of loss, whereas you get only a small bit of happiness from stealing, if any. Honest work (working for one's living, earning one's way, living an upright life) tends to bring the most happiness. Does this make sense?

Now, obviously this is not entirely objective; pain and happiness are subjective things that can be influenced by perception. Nevertheless, you can derive absolute rules from this sort of reasoning.
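To make the proposed rule explicit, here is a minimal sketch of it in Python as a simple hedonic calculus: score an action by the happiness it brings minus the pain it causes, summed over everyone affected. The numbers are invented placeholders; as noted above, assigning real values to pain and happiness is exactly the part that is not objective.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    person: str
    happiness: float  # placeholder units; real measurement is the subjective, hard part
    pain: float

def moral_score(effects):
    """The rule sketched above: a positive total means 'right', a negative total
    means 'wrong', and the magnitude reflects how right or wrong."""
    return sum(e.happiness - e.pain for e in effects)

# Invented illustrative numbers, not measurements.
stealing = [Effect("thief", happiness=2, pain=0),
            Effect("victim", happiness=0, pain=8)]
honest_work = [Effect("worker", happiness=6, pain=1),
               Effect("employer", happiness=3, pain=0)]

print(moral_score(stealing))     # -6 -> wrong under the rule
print(moral_score(honest_work))  #  8 -> right under the rule
```

The counterexamples in the next reply target exactly this scoring step: cases where the sum comes out non-negative yet most people's intuitions still call the act wrong.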


phillip1882:

That's a start, but happiness and pain aren't all there is to it.

I'll just play devil's advocate and suggest a few cases which don't fit your definition:

Killing an entire family (no surviving relatives) by gassing them in their sleep. No pain or suffering is inflicted. It brings me intense pleasure to do this.

Keeping a child sedated with a drug which makes them deliriously happy and pain-free but turns them into a drooling vegetable for their entire life.

A patient is in intense pain. Any attempt at pain control would result in the patient's condition deteriorating and entering a phase which is pain-free but will result in his death after a few weeks. Suffering the pain for several days gives him a good chance of complete recovery. The patient knows this but does not believe that enduring the pain and surviving the condition is within his capability. He desperately wants pain relief despite the consequences. As a doctor I refuse to administer pain relief, despite the anguished protests of the patient. Eventually the patient recovers, and goes on to live a full and happy life. He thanks me for having made him suffer against his will.


I'm not exactly sure where I stand with all this; I'd have to spend more time considering things than I have available at the moment, but I'll play devil's advocate with the Hitler thing.

For morality and evil, here's an oversimplified synopsis of what I think (from the view of evil):

Evil is just a matter of opinion. Case in point: Hitler thought he was doing good. Thing is, I think there is (at least arguably) an objective definition of evil*, but no human will ever be able to see it any other way than subjectively, so the best we can try to do is unanimously agree on subjective definitions of evil (I think we can all agree on genocide, for example) and try to emulate the objective structure of morality.

*Also relevant to the entire morality vs. evil thing: I often see the struggle between "good and evil" as just a struggle between self-interest and societal interest. For example, Cathy in East of Eden is "evil" in that she is extremely self-centered and completely apathetic when it comes to others. Thing is, this doesn't apply to many cases, like, again, Hitler.

EDIT: haha, just skimmed some answers, and I like Unreality's response

Exactly, Hitler thought he was doing his people a favor. Some might disagree, but he thought it was right.

Of course, since Hitler eventually committed suicide rather than face the tribunals of the Allies, did he really think that he was doing the "right" thing? It seems more likely to me that what octopuppy says here hits the crux of the Hitler issue (emphasis mine):

But now we are at odds with other human ideals such as freedom and privacy. These conflict with the notion of a perfectly functioning justice system. This now becomes a question of what kind of world we want to live in. Similar conflicts arise when considering limited resources. Do we stretch out the resources of the world among ever-increasing numbers of people, or take measures to control population? These are issues related to morality, because they define "the good" that we are aiming towards. I think that, too, might perhaps be determined by consensus of opinion, since we could determine what people want, what kind of world we wish to live in, by comparing the views of all people. Personally, I find that idea a rather depressing prospect because the desires of the majority of people are so manipulated by selfish interests that we would simply find ourselves aiming for an unsustainable existence. So who tells us what "the good" is? Some arbitrary authority?

Hitler wanted absolute power, so by turning the people against the Jews and all those who were "different" (an Emmanuel Goldstein, if you will), he was able to direct their wrath for his own selfish gains. When the plan failed and he faced retribution for his crimes, he didn't stand by what he had done; he killed himself instead of standing by those decisions as "right." Wouldn't that imply that, by his own calculation, he really did think that he was doing the "wrong" thing and simply didn't care enough to stop it? :unsure:

I just think that it's a possible problem when trying to use Hitler to illustrate the point.

Another example: on the show Scrubs, the chief of medicine had to decide which of two people to put in a drug trial. There was a rich man, who could survive without the treatment but would most likely donate a lot of money to the hospital, and a poor man, who would definitely die without the treatment. In the end the doctor picked the rich man. The poor man died, but the money that was donated was used to build a prenatal care center. So did he make the right choice?


At this point, you're driving into Ethics and such with views on Utilitarianism versus Rights Advocate morality. The Utilitarian approach involves choosing to do what is best for the most people (the doctor in Scrubs from p4p's example). Rights Advocate says that everyone should have an equal opportunity to their "rights," defined however society chooses. An RA would have trouble with the Scrubs example and would want to try to find some mechanism to save both. I'm not sure how he would decide what to do in the end...

There's a good example of Rights Advocacy in Season 1 of House. The new chairman of the board (Vogler) wants to eliminate House's whole department because House has four doctors treating an average of only one patient a week and often failing to save them anyway. That looks bad for the hospital's numbers, so it may receive funding cuts. More specifically, in one episode, House wants to put his patient on a clinical trial, which is the patient's last and best chance for survival, but Vogler refuses since it is likely already too late and the patient will die anyway. If the patient dies while on the clinical trial, then that gets a "blip" in the report to the FDA, and as Vogler puts it, "The FDA eats blips for breakfast," meaning that they might not approve a safe drug if enough of these very sick people die while on it. Vogler wants to potentially help many people down the line by getting the drug approved; meanwhile, House is furious because it takes away the only chance the patient has, which he thinks impinges on the rights of that individual.

In the end, I think that the best system people have found is a balance between the two views. While maximizing the benefit to the most people, is it fair to trample the toes of a few? That can lead to some pretty horrendous (in a largely subjective way) decisions when it comes to demolishing people's houses (or planets :lol: ) for the good of a highway bypass, for instance. It can be a lot worse than that, of course. Part of the equation is that we think of ourselves as individuals; if there were no difference between each of us, then I think that Utilitarianism would be the obvious direction for morality. Consider that ants and bees, which exist as a collective, have no consideration for self-preservation; every action is purely for the good of the hive. If humanity eventually evolved (or developed by other means) a mechanism of collective thought and control, then chances are RA would disappear from the discussion.

That's an interesting point. We act out of (genetic) self-interest whereas ants and bees act for the collective good, and it is that selfish nature that gives rise to the desire for individual rights. By valuing individual rights, we are advocating selfishness, and since most people do so to some extent, we must acknowledge that selfishness has some moral value.


To sort of flip your statement on its head, I think that we like to selfishly argue that we each have a contribution to make to the betterment of humanity. So we do all want to make humanity better as a whole, but we each think that we deserve to be a part of that process. I don't know that we can say that selfishness has a moral value so much as we can say that we want it to have a moral value. We want to be significant. If we didn't have a capacity for memory, then I also think that that would lead to more Utilitarian lifestyles. We remember great figures from history: Alexander the Great, Caesar, Genghis Khan, Napoleon, etc., and cultural figures like Homer, Dante, Shakespeare, Mozart, etc., and we like the notion that on some level we could be like them too, even if only on a smaller scale (i.e. stories like, "My great-grandfather was in the trenches at..." and so forth).

If we had no capacity for remembering these people or prior events, then what benefit would an individual gain from working against the whole just for himself? There wouldn't exist that Romantic idea that you "can be great just like them" that is especially prevalent in American society. In such a society, if you saw that the greatest benefit to society came from your own self-sacrifice, then what would be staying your hand? I think that in our society the thought that we should be able to contribute and continue to contribute, even if we don't help others right now, drives us toward these selfish acts. (I could be wrong about a lot of people I suppose...there are a lot of people who seem to act for their own personal selfish gains without a care about long-term sustainability of their actions. :( I kind of just contradicted myself from where I started, but I do think that there are those who act selfishly to a point because they think that they can act for the greater good later, while there probably exist others who are completely disinterested in the common good, which is fine until they decide that the best thing for them to do is run for office... :dry: )

I'm not sure where ethics and morality fit into all this, because regardless of whether there exists a logical, objective basis for morality, without an agreed-upon set of moral guidelines civilization collapses. If there is no consensus, then no one can feel safe trusting a fellow human being, since the other might betray that trust for his/her own personal gain at some point in the future (like octopuppy's prisoner's dilemma question :D ). So there is certainly a rational basis for the existence of morality, even if we can't seem to find an objective definition of what that is.


I believe that, as dawh states, logical or not, a semi-consistent moral basis is required for civilization to exist. I would posit that from an evolutionary standpoint both selfish and altruistic instincts can be selected for.

Examples:

Selfishness: a male kills other males, or in some other way denies their reproductive access to females, thus preserving his genetic information (instinct to kill).

Altruism: the same male sacrifices his life fighting another population of humans (along with their competing genetic information), thus preserving his offspring and genetic information.

My main thought was to see if anyone can logically say any action is right or wrong. Either moral judgement comes from some universal place, or it is subjective, and no individual or group can claim superiority. In the latter case, any behavior that flies in the face of utilitarian, rights advocate, or any other type of ethics may be labeled non-normative (when ethical consensus exists) or simply different, but never wrong or right.


Oh, I missed phillip1882's remark. Good examples, octopuppy.

Would bringing about the extinction of our species be ethically neutral? It stops all future pain.

I'm glad that logic and reason are not the be all and end all of moral foundation.

God exists. He obeys the same universal moral laws we have. He communicates with us; i.e., most people have a conscience. We also get clarification through His prophets, men inspired by God to tell us plainly what is right and wrong. We in turn have the opportunity and duty to test whether or not those prophets are correct.


Would bringing about the extinction of our species be ethically neutral? It stops all future pain.

No, it also stops all current pain, joy, and happiness. (If there are no more humans, there can be no more future pain for humanity.) But I'll grant that, without anyone to mourn our loss, it's hard to say who this would hurt, except perhaps God. I was merely giving one example of how you might go about trying to formulate ethical guidelines.

Whether God obeys the same moral laws we do is debatable. If no one murdered, would that mean no one would die by God's hand (no earthquakes, volcanoes, etc.)? If we do murder, does that give God the ability to do the same?

If not, then why does God allow evil to occur?

A better question: is something moral simply because God says it is, or does the moral exist even without God?
