The Psychology of Morality

M. Tarik Ozgur

May 1, 2014

When the captain of a sunken ship orders that the lowest-ranking crew member among the lifeboat's survivors be killed and used to sustain the lives of the others, I wonder: what are his thought processes? When you pass a sickly homeless person sleeping on the street and think you would like to help, but suspect the money would probably go toward drugs, I wonder: how did you come to your moral decision?

These situations fall within the psychology of morality, an intriguing and growing field of study. A substantial body of insightful research has attempted to understand morality from a psychological perspective. Many questions have been raised in the realm of morality and ethics, such as: How do people think about morality? What are people's concepts of a moral person? How do people apply morality in conflict and in their everyday lives?

In psychologists' attempts to answer these questions, we learn more about humanity as a whole and about ourselves as individuals. This article summarizes some of the most prominent studies in the psychology of morality and uses their perspectives to work toward a better understanding of who a moral person is and what they do.

Exploring differences in moral foundations between political liberals and conservatives is one interesting area of research in the psychology of morality. Graham et al. (2009) defined five foundations of moral intuition: harm/care, fairness/reciprocity, in-group/loyalty, authority/respect, and purity/sanctity. The researchers found that political liberals tended to judge a moral situation chiefly along the harm/care and fairness/reciprocity dimensions, while conservatives were more likely to draw on all five dimensions equally. These results seem to agree with the two groups' ideologies. Given that conservatives are less receptive to changes in political policy and culture, it makes sense that they would place a high value on the authority of a government and give more weight to protecting their in-group from outside influences. This ideology may in turn stem from a Hobbesian worldview (after Thomas Hobbes, 1588-1679): everyone is, at the most basic level, self-interested, and must therefore protect their own interests from the interests of others.

Liberals, on the other hand, would be more likely to defy the authority of the government or set aside in-group concerns if they felt a moral injustice was being committed. This may be one reason why conservatives sometimes accuse liberals of being unpatriotic.

As for the purity/sanctity foundation, there is some doubt as to whether it can really be considered a moral facet at all. Conservatives appear to treat it as one, while liberals give nearly zero weight to purity in their moral judgments, as if it were an unrelated concept. This difference may also help explain why conservatives tend to oppose gay marriage, while liberals are more likely to support gay relationships.
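
To make the reported pattern concrete, here is a minimal sketch in Python of what the liberal/conservative asymmetry might look like in foundation endorsement scores. The numbers are invented for illustration only; they are not data from Graham et al. (2009).

```python
# Hypothetical endorsement scores (0-5 scale) for the five moral foundations,
# invented to illustrate the pattern reported by Graham et al. (2009):
# liberals lean heavily on harm and fairness, while conservatives endorse
# all five foundations more evenly.

FOUNDATIONS = ["harm/care", "fairness/reciprocity", "in-group/loyalty",
               "authority/respect", "purity/sanctity"]

scores = {
    "liberals":      [3.7, 3.6, 2.1, 1.9, 1.3],  # invented values
    "conservatives": [3.1, 3.0, 3.0, 3.2, 2.9],  # invented values
}

for group, ratings in scores.items():
    # The gap between the most- and least-endorsed foundation captures
    # how evenly a group spreads its moral concern.
    spread = max(ratings) - min(ratings)
    top_two = sorted(zip(ratings, FOUNDATIONS), reverse=True)[:2]
    print(f"{group}: spread = {spread:.1f}, "
          f"top foundations = {[name for _, name in top_two]}")
```

On these invented numbers, the liberal profile shows a large spread concentrated on harm and fairness, while the conservative profile is nearly flat, which is the qualitative shape of the published finding.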

Before moving on, let me pose another question: which action is more moral, volunteering time in a soup kitchen or keeping your vow to your wife not to go to the racetrack and gamble on payday?

Now we turn our attention to an enlightening study by Janoff-Bulman et al. (2009) that analyzed morality in a different way: by dividing it into two distinct types, prescriptive and proscriptive. Prescriptive morality is commendatory and encouraging in nature; it is characterized by taking positive action and by the question of what one should do to be moral. Proscriptive morality is more commanding and punitive; it is characterized by preventing negative action and by the question of what one should not do to avoid immorality. The distinction between the two can be stated as helping versus not harming.

The researchers provide evidence for these two types of morality in their studies. In one experiment, participants were asked to describe what they thought it meant to be either moral or immoral, depending on the condition. The researchers correctly predicted that descriptions of proscriptive morality would use more concrete and defined language, so as to help its adherents know exactly which actions are wrong, while the language of prescriptive morality would be more abstract, so as to encompass more possible right actions.

It was also found that people were more disapproving of someone committing forbidden acts than of someone failing to perform righteous acts. This finding, that an overtly immoral act is generally more objectionable than a failure to commit a positive act, is strong evidence for the prescriptive/proscriptive distinction.

Researchers are asking many more questions about morality. For instance, what lies behind the rationalization of racist behavior or attitudes? How are people's moral principles influenced by the particular identities of the individuals involved? Uhlmann et al. (2009) examined these topics. Most of us think of ourselves as having a relatively stable moral standing, but this study showed that for most people this simply is not the case. Participants were presented with moral dilemmas that measured whether they would support a consequentialist or a deontological solution. The researchers hypothesized that changing some details of a scenario could shift people's preference for either consequentialism (the idea that an action normally considered immoral may be permissible if it benefits the greater good) or deontology (the idea that some actions are morally wrong under any and all circumstances), and this is in fact what they observed.

They observed that political liberals were more likely to sacrifice a white man to save a group of black people than to sacrifice a black man to save a group of white people. The reverse held for conservatives, who were more willing to sacrifice a black man to save a white group. They also found that conservatives were more likely to endorse the unintentional killing of Iraqi civilians by American forces than the unintentional killing of American civilians by Iraqi forces.

These results provide a striking example of how people's moral conclusions can change when the specifics of those involved in the moral equation change, which in turn suggests that people value some lives more than others. Conservatives, in particular, seem to value people in their in-group over people outside it. The study also suggests that liberals may actually devalue in-group members in an attempt to maintain a non-racist morality.

Another major question is how religious concepts, and the knowledge that one's actions are being judged, affect how morally people behave. In a study by Shariff and Norenzayan (2007), participants played an economic game that measured their generosity toward total strangers. The researchers discovered that people primed with words relating to God and religion were more generous on average than those who received no priming. Priming people with concepts of secular moral institutions had the same effect.

These results provide strong evidence that people will try to act more morally if they have some idea in the back of their minds, conscious or not, of being judged in some manner for their actions. As this study shows, the effect is not exclusive to religious concepts. In fact, related research by Bateson, Nettle, and Roberts (2006) showed that even an indirect suggestion of being judged by other people was enough to reproduce the effect, an increase in moral behavior. Specifically, contributions to an honor-system payment box for drinks nearly tripled when the image posted near it resembled human eyes rather than a control image. Apparently, a pair of eyes is enough to trigger a conscious or subconscious sense of being watched and judged.

What is known about neural activity in the brain during moral judgments? What roles do cognitive and emotional processes play in thinking about morality? Greene et al. (2004) mapped the brain activity of individuals as they worked through hypothetical moral dilemmas. Areas associated with abstract reasoning and cognitive control, including the dorsolateral prefrontal cortex and the anterior cingulate cortex, were more active during difficult moral judgments in which a utilitarian resolution required violating personal moral convictions. The researchers concluded that utilitarian moral judgments rely more on cognitive processing and abstract thinking, while more personal moral judgments rely primarily on social-emotional processing.

This is good evidence for two different types of moral processing: one cognitive and more logical, the other emotional and more personal. In the same study, Greene and colleagues also examined how the degree of personal involvement in a scenario related to the likelihood of a utilitarian resolution, and therefore to engagement of the associated cognitive, abstract-thinking brain regions. They looked at what happens when the classic dilemma of the runaway trolley (Thomson, 1986) is made more or less personal. People were more likely to engage in utilitarian calculations when the harm was less personal, as when pulling a lever that kills one person, than when the harm was more personal, as when pushing someone off a footbridge.

Morality is often connected to religion and religious values, so a natural topic to investigate is the role that sacred values may play in moral psychology. Ginges et al. (2007) examined the Israeli-Palestinian conflict to ask how sacred values figure in the resolution of political conflicts, and in particular how instrumental incentives, such as monetary gain, affect willingness to compromise on sacred values.

Interestingly, they observed that adding instrumental incentives to hypothetical deals involving the forfeit of a sacred value actually made subjects who held such values more opposed to, and more outraged by, the compromise than when the compromise was offered on its own. These subjects were less opposed to compromise, however, if the deal also stipulated that the opposing party would give up one of its own sacred values.

These results are important in showing how radically sacred values differ from instrumental values. Sacred values are, for the most part, non-negotiable, except in the cases mentioned above, and the idea of offering something of instrumental value in an attempt to negotiate the non-negotiable seems only to insult those who hold the sacred values. Only when another sacred value is offered up do we see some willingness to compromise, suggesting that sacred values cannot be traded against instrumental values but can be weighed against other sacred values. One may therefore wonder whether sacred values are truly incontestable, or simply a different type of value that cannot be compared with more material concerns.

An important side of the psychology of morality concerns how people perceive the moral legitimacy of authority under different circumstances, and more specifically, how that perception depends on how much moral investment people have in the issues at stake. Skitka et al. (2009) correlated individuals' degree of moral conviction with how much they supported or opposed the 2006 U.S. Supreme Court ruling on physician-assisted suicide, which prevented the Attorney General from prosecuting doctors in Oregon who operated under state law. They found that the degree of moral investment a person had in the decision increased the variance in perceptions of the Court's legitimacy: people who supported the decision with high moral conviction endorsed the Court's post-decision legitimacy more strongly than supporters with lower conviction, while opponents with high moral conviction rejected its legitimacy more strongly than opponents with lower conviction.

These results are strong evidence that our perceived legitimacy of authority is intimately tied to our moral convictions on the issues that matter most to us. If the government made a ruling we disagreed with but did not care much about (for example, the ruling that pizza counts as a vegetable), we would have little reason to deny the government's legitimacy. Decisions on issues where we hold a strong moral position, however, tend either to strongly reinforce or to strongly undermine the perceived legitimacy of authority as we try to reconcile our moral standing with the authority's decree. A follow-up study could test the hypothesis that liberals would be more willing to deny the legitimacy of the government, while conservatives would be more willing to affirm the government's power.
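
As a rough numerical illustration of that variance effect, the sketch below uses invented legitimacy ratings on a 1-7 scale to show how high moral conviction pushes supporters and opponents toward opposite extremes, inflating the spread of ratings. None of these numbers come from Skitka et al.'s data.

```python
# Invented legitimacy ratings (1-7 scale) illustrating how moral conviction
# can widen the spread of perceived legitimacy, as in Skitka et al. (2009).
from statistics import pvariance

ratings = {
    ("support", "high conviction"): [6.5, 6.8, 7.0, 6.2],
    ("support", "low conviction"):  [5.0, 5.3, 4.8, 5.1],
    ("oppose",  "high conviction"): [1.5, 2.0, 1.8, 2.2],
    ("oppose",  "low conviction"):  [4.2, 4.0, 4.5, 3.9],
}

for conviction in ("high conviction", "low conviction"):
    # Pool supporters and opponents at the same conviction level and
    # measure how spread out their legitimacy ratings are.
    pooled = (ratings[("support", conviction)]
              + ratings[("oppose", conviction)])
    print(f"{conviction}: variance = {pvariance(pooled):.2f}")
# On these invented numbers, the high-conviction pool is far more
# polarized (much larger variance) than the low-conviction pool.
```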

We believe it is moral to care about others and to give to charity, but what motivates the moral response of giving? Some psychologists are interested in this question. Olivola (2011) found that people were willing to give more money to a charity if doing so required them to run a marathon rather than simply attend a picnic. This finding suggests that associating donations with self-sacrifice or suffering activates people's concepts of morality.

How could we measure whether concepts of justice and fairness also strengthen people's moral responses? An experiment might be set up in this manner: one group of participants would be told that they would be donating to a village in Africa that is suffering from a drought and, as a result, running low on food. The other group would be told that local rebels are constantly raiding the village and taking its food, thus depleting its supplies. We would then measure how much people are willing to donate to the village in each situation. Need arising from a natural disaster should not activate concepts of justice and fairness; even granting that some people see drought as an inherently unjust event, the second condition, need arising from a military-style attack, should greatly outweigh the first in terms of justice/fairness considerations. Based on the results of the Olivola (2011) study, it is reasonable to expect that people would donate more money to the raided village, since that condition should activate moral concepts such as justice and fairness. A sketch of how the resulting donation data might be compared appears below.
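
Here is a minimal sketch, in Python, of how the donation amounts from this hypothetical two-condition experiment might be analyzed. The amounts are invented placeholders, and the simple two-sample t-test stands in for whatever analysis an actual study would use.

```python
# Compare invented donation amounts (in dollars) between the two
# hypothetical conditions: need from drought vs. need from raids.
from scipy import stats

drought = [5.0, 3.5, 4.0, 6.0, 2.5, 5.5, 3.0, 4.5]  # natural-disaster condition
raided  = [7.0, 6.5, 8.0, 5.5, 9.0, 6.0, 7.5, 8.5]  # unjust-attack condition

# Independent-samples t-test: is the raided-condition mean reliably higher?
t, p = stats.ttest_ind(raided, drought)
print(f"mean donation, drought condition: {sum(drought)/len(drought):.2f}")
print(f"mean donation, raided condition:  {sum(raided)/len(raided):.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
# If justice/fairness framing matters, we would expect significantly
# higher donations in the raided condition.
```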

What do these studies tell us about what a truly moral person is? Let us first look again at the five moral foundations from Graham et al. (2009). It appears that a truly moral person bases their judgments primarily on harm/care and fairness/reciprocity. Authority or in-group concerns can conflict with harm and fairness considerations, and in such situations the latter should prevail. One only needs to consider an example such as this: imagine you are asked to commit atrocities in the name of your country. It should be clear that the atrocities could not be justified that way. Based on the results of Skitka et al. (2009), such a request may even lead one to deny the legitimacy of whatever authority commanded the action, which is what a truly moral person would and should do in that situation. For reasons like these, harm/care and fairness/reciprocity should be at the forefront of one's moral concerns.

Harm/care and fairness/reciprocity are also probably closely related to each other; in many situations, one moral concept is found with the other. Consider a man who beats his wife in an act of drunken violence. Harm/care is clearly involved, but we could also say that he is committing an injustice against his wife. Similarly, consider a banker who swindles a client out of thousands of dollars. This clearly concerns fairness/reciprocity, but we could also see the action as harming someone in a non-physical way.

Research further clarifies a truly moral person as one who leads a disciplined life built first on the proscriptive (preventing immorality) and then on the prescriptive (acting morally). Such a person must be a role model for everyone else to follow, and is therefore careful not to commit proscribed acts. A person who performs many good deeds but also many bad deeds will appear insincere and hypocritical; in other words, a truly moral person does their best not to mix the good with the bad. Once a person has built a strong moral foundation of avoiding proscribed behaviors, he or she can branch out and help others with ever-increasing frequency. Furthermore, in light of the studies on the effects of being watched, a truly moral person acts in every situation as though being watched or awaiting later judgment, even when alone, knowing that any action may have consequences at some point, immediate or distant.

Work in the psychology of morality sheds some light on how a moral person behaves, but an ideally moral person does what is right every time, no matter the circumstances. This is an ideal, and very few people can be considered ideal moral examples. Prophets are the only people who could be considered morally infallible; prophets like Moses, Jesus, and Muhammad, peace be upon them, serve as examples of perfect moral conduct. Even though none of us can hope to attain the rank of a prophet, every person ought to strive for this ideal and be the best person they can be. If we hope to have any positive impact on the world, we must first start by changing ourselves for the better. Only then can we attempt to sincerely help the people around us, starting with close family and friends, and later the larger world.

M. Tarik Ozgur is a 2013 graduate of Lehigh University's Department of Physiology.

References

  • Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2(3), 412-414.
  • Ginges, J., Atran, S., Medin, D., & Shikaki, K. (2007). Sacred bounds on rational resolution of violent political conflict. Proceedings of the National Academy of Sciences, 104(18), 7357-7360.
  • Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029-1046.
  • Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389-400.
  • Janoff-Bulman, R., Sheikh, S., & Hepp, S. (2009). Proscriptive versus prescriptive morality: Two faces of moral regulation. Journal of Personality and Social Psychology, 96(3), 521-537.
  • Olivola, C. Y. (2011). When noble means hinder noble ends: The benefits and costs of a preference for martyrdom in altruism. In The Science of Giving: Experimental Approaches to the Study of Charity (pp. 49-62). New York, NY: Routledge.
  • Shariff, A. F., & Norenzayan, A. (2007). God is watching you: Priming God concepts increases prosocial behavior in an anonymous economic game. Psychological Science, 18(9), 803-809.
  • Skitka, L. J., Aramovich, N. P., Lytle, B. L., & Sargis, E. G. (2009). Knitting together an elephant: An integrative approach to understanding the psychology of justice reasoning. In The psychology of justice and legitimacy: The Ontario symposium (Vol. 11, pp. 1-26).
  • Thomson, J. J. (1986). Rights, Restitution, and Risk: Essays in Moral Theory. Cambridge, MA: Harvard University Press.
  • Uhlmann, E. L., Pizarro, D. A., Tannenbaum, D., & Ditto, P. H. (2009). The motivated use of moral principles. Judgment and Decision Making, 4(6), 476-491.