Nina Slanevskaya. Interdisciplinary Neuroscience

Neuromorality

 

 

 

The Clash of Social and Inborn Moral Values

Neither a materialist nor a non-materialist neuroscientist will deny that human beings are social thinkers by nature; this can be considered a widely recognized fact (Heberlein et al., 1998). Some neuroscientists argue that it is possible to identify the specific brain structures that participate in social thinking. According to Adolphs, the “social” brain comprises the ventromedial prefrontal cortex (social reasoning and decision-making), the amygdala (fear, distrust, reading information from faces), the right somatosensory cortex (the body state during social interaction; empathic reaction), the insula (which shares functions with the somatosensory cortex), the cingulate cortex (error detection), and visual association areas in the temporal cortex (which participate in emotions and influence the body state). Some structures in the hypothalamus, thalamus, and brain stem also take part in social thinking (Adolphs, 1999). People often and involuntarily imitate the behaviour of others in their social surroundings and guess what is expected of them. Neuroscientists explain this by the presence of mirror neurons, the empathic reaction, and the theory of mind (ToM). People have brain mechanisms that help them play the social role assigned to them by society.

The psychological experiment carried out by Philip Zimbardo at Stanford University in 1971 shows people’s readiness to play their social roles. Students were divided into “prisoners” and “warders” at random (Zimbardo et al., 2000). “The prison” was set up in the basement of the psychology department at Stanford University. Soon the students began to feel and behave like warders and prisoners. Many “warders” began to display sadism, and many “prisoners” became passive and depressed, even though nothing prevented the “prisoners” from withdrawing from the experiment if they felt humiliated. Instead, they diligently played their social roles. None of them had been a criminal or a sadist before the experiment; social factors made them feel like criminals and sadists.

A similar dependence on the social framework appeared in the series of socio-psychological experiments conducted by Stanley Milgram in the 1960s and 1970s (Milgram, 2009). Milgram designed his experiment to assess people’s readiness to submit to authority when given a task that went against their conscience. He wanted to understand why so many Germans had agreed to commit cruelties under Hitler, and what the psychology of mass immoral behaviour was. To the surprise of many of Milgram’s colleagues, who did not believe that normal people (the “teachers” in the experiment) would agree to keep participating if required to increase the punishment (electric shocks of up to 450 volts) of other people (the “learners”) for their mistakes, the majority of “teachers” continued and raised the punishment all the way to 450 volts, irrespective of their real professions, gender, and age: about 65% of the “teachers” did so, instead of the 1-2% Milgram’s colleagues had predicted, and the result held whether the groups of “teachers” consisted of women or of men. Though the “teachers” showed psychological discomfort (nervous laughter, sweating) while increasing the voltage, they could not bring themselves to interrupt the experiment and rebel against the social authority of the scientist, who in their eyes had the legitimate power to conduct such experiments. However, they did not increase the voltage if the scientist left the room for a few minutes.
This means that they did not want to hurt other people when the authority did not press them to behave immorally. But once placed in a certain social framework, they began to play their social role in the name of “scientific progress and mankind” or some similar cause.
When people are given ideological and social legitimacy and institutional support, the majority of them seem to prefer following social norms to their own moral reflection. The pressure of an immoral social surrounding makes them doubt or suppress their inborn moral values. Social norms are usually couched in moral terms, so it becomes rather difficult to separate moral norms from social ones and to judge on a moral basis.

Victoria McGeer asserts that people are capable of distinguishing violations of moral norms from violations of social norms (McGeer, 2008). Analyzing the neurological basis for the possibility of moral behaviour in psychopaths and people with autism, McGeer distinguishes socially approved behaviour from morally approved behaviour. A moral action is not determined by prescribed social norms and does not depend on the permission or approval of official authorities. Social norms of behaviour, on the contrary, have a temporary character and are defined by the norms existing in a given society; if the norms change, the previously condemned behaviour is no longer considered immoral (McGeer, 2008). In other words, on the one hand there is a set of absolute moral rules, which hold in all centuries and for all people, and, on the other hand, there are temporary social norms presented as moral ones, which exist in a particular society. McGeer, Kennett, and Fine state that adult psychopaths and children with psychopathic symptoms do not feel the difference between actions based on conventional norms of behaviour in society and those based on universal moral norms (Kennett, Fine, 2008; McGeer, 2008). Frédérique de Vignemont and Uta Frith regard this division between conventional social norms and universal moral norms as a great discovery in the study of moral thinking (Vignemont, Frith, 2008). The difference is understood even by three-to-four-year-old children, and it is a cross-cultural phenomenon. Vignemont and Frith agree with Nichols and Folds-Bennett that people usually consider something “moral” if it has universal and permanent moral value, and “social” if it depends on the social context and power (Nichols, Folds-Bennett, 2003).

For those who doubt what is moral, the famous German philosopher Immanuel Kant (1724-1804) suggested a formula based on three principles, which leads to moral action irrespective of the epoch and the social system. The principles are the following: 1. Good will (no selfish interest in the moral action); 2. The universalizability of an action (the chosen action could become a universal law applied to others and to oneself); 3. A human being as an end in itself (respect for the intrinsic worth of a human being).
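Kant’s formula can be read as a conjunctive test: an action qualifies as moral only if all three conditions hold at once. The minimal Python sketch below makes that structure explicit; the predicate names are our own illustrative assumptions, and the hard philosophical work lies in judging each condition, not in combining them.

```python
def is_moral_by_kant(good_will: bool,
                     universalizable: bool,
                     treats_persons_as_ends: bool) -> bool:
    """Kant's three-part test: all conditions must hold simultaneously."""
    return good_will and universalizable and treats_persons_as_ends

# A lie told for personal gain fails the first two conditions outright:
print(is_moral_by_kant(good_will=False,
                       universalizable=False,
                       treats_persons_as_ends=True))  # -> False
```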

Human beings, unlike animals, can create different social organizations to their taste, irrespective of the natural surroundings, and each particular organization of social life demands its own special rules of behaviour. Thus, we can create any social system we like and think good for us. The matter seems to depend on our knowledge of human nature, and on whether there is a powerful group that can impose its principles of social organization on the rest of us.

 

Ethical theories

There are ethical theories of the first order (how we should behave) and ethical theories of the second order (meta-ethics, i.e. theorizing about ethical theories). Among the first-order theories we can discern three main groups: (1) duty-based (e.g. Kant's ethics); (2) consequentialist (e.g. Bentham's utilitarian ethics); (3) virtue theories (e.g. Aristotle's ethics).

(1) Duty-based ethical theories assert that acting morally means acting according to our duties: we ought to perform certain actions regardless of the consequences that might follow from them. The motives for action must be “pure” and cannot include any calculated benefit. The word “duty” here means a morally necessary thing to do, which you also want other people to do to you, and which can be regarded as a universal law for all: all people should behave like that. Happiness cannot be a universal moral principle, because a person may want to become happier at the expense of other people’s unhappiness.
(2) Consequentialist ethical theories are based on the principle of the most beneficial consequences of an action: “good” is what brings the greatest total happiness.
(3) Virtue ethical theories focus on the character of an individual and his personal life, unlike the previous two groups, which focus on the rightness or wrongness of particular actions. Happiness comes from coping with life’s problems morally, thanks to acquired virtues.

The ethical theories of the second order (meta-ethical theories) can be divided into two broad groups: ethical realism and ethical anti-realism.
Ethical realism presupposes the existence of objective moral truths.
Ethical anti-realism, on the contrary, claims that there are no objective moral truths at all.
There are two main groups of ethical theories belonging to realism: ethical naturalism (objective moral properties exist, and they are reducible to non-evaluative terms or can be justified empirically on the basis of observation) and ethical intuitionism (moral properties are objective and irreducible; there is no need to restate them in non-evaluative terms). And there are three main groups of ethical theories belonging to anti-realism: subjectivism (moral statements are true subjectively, not objectively), non-cognitivism (moral statements are neither true nor false), and nihilism (moral statements are false).

 

Ethical naturalism and ethical intuitionism in neuroscience

Ethical naturalism in social neuroscience should be understood as an ethics asserting that healthy brain structures are equivalent to moral behaviour, whereas ethical intuitionism asserts that a human being has an inborn mental (and/or spiritual) quality called moral intuition.
Ethical intuitionism is based on inborn moral intuitions. Huemer defines intuitions as “mental states in which something appears to be the case upon intellectual reflection (as opposed to perception, memory, or introspection), prior to argument” (Huemer, 2005: 232). We have intuitions (“intellectual appearances”) about certain abstract truths, similar to our perceptual experiences (“sensory appearances”) of the physical world. Our intuitions are merely the form of our awareness: we are directly aware of moral facts (Huemer, 2005). Intuition may sometimes fail, since it can be affected by cultural, ideological, and religious indoctrination; but human beings are liable to make mistakes in all fields of activity, and intuition is no exception to the rule.
The interpretation of moral intuition by the neuroscientist and ethical naturalist Joshua Greene is quite different from Huemer’s. According to Greene, moral intuition is based on emotions and basic instincts and is genetically inherited: “The emotions most relevant to morality exist because they motivate behaviors that help individuals spread their genes within a social context” (Greene, 2008: 59). “The theory of reciprocal altruism explains the existence of a wider form of altruism: Genetically unrelated individuals can benefit from being nice to each other as long as they are capable of keeping track of who is willing to repay their kindness” (Greene, 2008: 59). A materialist neuroscientist, Greene thinks within the framework of Darwin’s theory and insists that Kant’s deontology cannot be considered moral philosophy, because people giving deontological answers show an emotional reaction in the brain while it is being scanned, i.e. the brain structures responsible for emotional reactions are activated when they give deontological answers, and they have no time for the moral reflection necessary for philosophy (Greene, 2008). Meanwhile, the consequentialist resolution of a moral dilemma shows brain activity in the areas responsible for cognitive thinking. Greene concludes that Kant invented his deontological theory in an attempt to rationalize moral emotions, and suggests that deontology is a kind of moral talk driven by a strong feeling that certain things must not be done (Greene, 2008). Greene emphasizes that answers given from the deontological position come much more quickly than those given from the consequentialist position, because consequentialist decisions demand more time and cannot be made at the intuitive, emotional level. To support his point of view, Greene presents neuroimaging results from scanned brains, which also demonstrate people’s different attitudes to personal and impersonal moral dilemmas. Personal moral dilemmas cause greater activity in three areas connected with emotions (the posterior cingulate, the medial prefrontal cortex, and the amygdala), as well as in the superior temporal sulcus. Moral dilemmas that do not involve the person himself are accompanied by greater activity in two classically cognitive areas of the brain: the dorsolateral prefrontal cortex and the inferior parietal lobule.

John Mikhail disagrees with Greene and sees a different basis for moral intuition (Mikhail, 2008). Mikhail is convinced that the human brain operates a computational “moral grammar”, similar to the “mental grammars” at work in other spheres of human activity, such as language, music, and the recognition of faces. A quick moral answer is caused by cognitive dissonance in the brain produced by this computational moral grammar (Mikhail, 2008). On this account, Greene’s conclusion that quick deontological answers have no cognitive background is wrong.
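As a purely illustrative sketch of what a computational moral grammar might look like, consider the toy rule system below. It encodes only one familiar principle (harm used intentionally as a means is prohibited, while harm foreseen as a side effect of a good end may be permissible); the Act representation, the single rule, and all names are our simplifying assumptions, not Mikhail’s actual formalism.

```python
from dataclasses import dataclass

@dataclass
class Act:
    causes_harm: bool     # does the action injure someone?
    harm_is_means: bool   # is the harm the means to the agent's goal?
    goal_is_good: bool    # does the action aim at a greater good?

def evaluate(act: Act) -> str:
    """Apply the toy grammar's rules and return a fast, automatic verdict."""
    if act.causes_harm and act.harm_is_means:
        return "impermissible"                 # intentional harm as a means
    if act.causes_harm and act.goal_is_good:
        return "permissible (side effect)"     # harm foreseen, not intended
    return "permissible"

# The bystander trolley case vs. the footbridge case:
bystander = Act(causes_harm=True, harm_is_means=False, goal_is_good=True)
footbridge = Act(causes_harm=True, harm_is_means=True, goal_is_good=True)
print(evaluate(bystander))    # -> permissible (side effect)
print(evaluate(footbridge))   # -> impermissible
```

Run on the two classic trolley cases, the same rule set yields the asymmetry that experimental subjects report within seconds, which is the point of the “grammar” metaphor: fast answers can still be the output of structured computation.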

Perhaps moral action, in principle, cannot involve the calculation of advantages, even for the majority’s benefit. There is a fundamental difference between a rational decision (the calculation of advantages for the majority’s benefit) and a moral decision, which disregards any advantage for anyone except the moral duty itself.

Materialist neuroscientists put forward different theories of moral thinking and name the brain areas involved in the process. Moll lists the areas whose damage worsens moral thinking (Moll et al., 2005): anterior prefrontal cortex (aPFC), dorsolateral prefrontal cortex (DLPFC), ventral sectors of the prefrontal cortex (vPFC), ventromedial sectors of the prefrontal cortex (vmPFC), lateral orbitofrontal cortex (lOFC), medial orbitofrontal cortex (mOFC), posterior superior temporal sulcus (pSTS), anterior temporal lobe (aTL), hypothalamus, septal nuclei, basal nuclei and neighbouring structures, and other limbic and paralimbic structures.
Moll relates specific problems of moral thinking and behaviour to the particular damaged area of the brain. For example, if the ventromedial sectors of the prefrontal cortex are damaged, a human being lacks the feelings of pride, embarrassment, and regret.

The following hypotheses within ethical naturalism are worth mentioning: Pfaff’s hypothesis of the Golden Rule (Pfaff, 2007); Moll’s hypothesis of the event-feature-emotion complex framework (Moll et al., 2008); Greene’s hypothesis of conflict processing in moral judgments (Greene, 2008); Moll, de Oliveira-Souza and Eslinger’s hypothesis of moral sensitivity (Moll, de Oliveira-Souza, Eslinger, 2003); Blair and Cipolotti’s hypothesis of social response reversal (Blair, Cipolotti, 2000); Wood and Grafman’s hypothesis of the structured-event-complex framework (Wood, Grafman, 2003); and Lough, Gregory and Hodges’s hypothesis of the impairment of the theory-of-mind mechanism in sociopathy (Lough, Gregory, Hodges, 2001). Moll is perhaps the most prominent researcher in this field. His hypothesis proposes a connection between cognitive social activities (events) and emotional states (emotions), in which social characteristics (features) are interwoven into one whole (complex); hence the name “event-feature-emotion complex framework” (Moll et al., 2008). (A description of these theories is given in Nina Slanevskaya, Brain, Mind and Social Factors, St. Petersburg, Centre for Interdisciplinary Neuroscience, 2014.)

Summary:
(1) Neuroscientists working within ethical naturalism explain the objective existence of human moral thinking by the inborn neurobiological characteristics of the healthy brain, which “directs” the mind, and believe that moral thinking developed in the course of evolution because social moral norms helped people living in groups to survive.
Neuroscientists preferring ethical intuitionism consider moral thinking an inborn mental and/or spiritual quality of a human being. Unlike ethical naturalists, they assert that the mind “directs” the brain and that the brain changes under the influence of the thinking process, so a moral choice depends on the mind and free will, not on a deficiency of the brain, unless the brain is considerably damaged.
(2) Both parties agree that moral thinking is an objective process and that the social surrounding influences human moral thinking.
Having defined their positions on physical and mental substances (ethical naturalists are materialists and monists, whereas the majority of ethical intuitionists are dualists), neuroscientists propose hypotheses that embody different understandings of human responsibility for immoral actions.
(3) If moral thinking is determined by the neurobiological workings of the brain, then a person cannot be held responsible for his immoral actions. The concept of free will disappears, since behaviour is fixed by the peculiarities of a particular brain.
(4) If moral thinking depends on the mind and free will, then a person bears full responsibility for his actions.
Faced with such different scientific conclusions, based on different ontologies of brain and mind, society has a choice over which scientific recommendations to implement. What are the social implications in each case?
(5) If we chose the conclusion of ethical naturalism, the tendency would be to develop the medicated correction of an “immoral” brain; to improve the genetic characteristics of humans; to implant devices into the brains of criminals in order to control them; and to scan people’s brains for inborn neurobiological deficiencies before admitting them to certain jobs, i.e. to identify “immoral” individuals whose brain structures react unusually when they answer the questions.
However, social norms are not necessarily moral norms. There are neither perfect people nor perfect social systems. Political disobedience of citizens is, as a rule, a moral phenomenon, because they consider it their moral duty to oppose the authorities in order to improve the existing economic and political system for all people. The individuals who opposed slavery were far more moral than the majority who enjoyed the products of slave labour in a slave-owning society. Who will define the moral criteria? Who will define the moral state of the brain? Who will keep this information? Evidently, it will be in the hands of power elites, who will never allow any criticism: to criticize the powerful would itself be declared immoral.
(6) If we believe in two substances, as ethical intuitionists do, the picture will be different. Ethical intuitionists would recommend improving thinking abilities through education, religion, art, literature, and meditation. However, there is another danger: if people go deeply enough into spiritual practices, they may prefer staying at the individual level to being involved in the economic and political life of society, because the aim of spiritual practice is to liberate oneself from one’s social and individual problems.

 

Socio-politico-economic system which suppresses natural neuromorality

Neuroscience confirms that a human being with a normal brain cannot help thinking about whatever he comes across in social and moral terms, and inborn morality competes with adjustment to social norms. Moral judgments penetrate all of our life, including international politics. We react empathically to events at the other end of the world, feel the sufferings of unknown people while watching them on TV, and grow angry at the unfairness done to them. Moral anger can drive people to extremes.
Analyzing current economic practices, the neuroeconomist Paul Zak gives the example of the suicide of Clifford Baxter, former vice chairman of Enron, the seventh-largest corporation in the USA. Towards the end of the 1990s Baxter began complaining to Jeff Skilling, the chief executive officer, about the inappropriateness of their business practices. In 2001 he resigned, and in 2002 he committed suicide. Baxter was known as a successful man and a man of high morality: he had a happy family life, had served with distinction in the military, and consistently criticized the company’s ethical transgressions and legal abuses. Considering why the majority of employees at Enron kept silent and did not support Baxter, Zak puts forward the following ideas (Zak, 2008: 260-261):
1. “...the process of economic exchange values greed and self-serving behaviors, and inadvertently produces a society of rapacious and perhaps evil people”. Modern economies are dehumanized;
2. “…there could be a selection bias in which amoral greedy people were hired in key posts, and this behavior filtered down to other employees”;
3. Most people behave ethically most of the time, “nevertheless, in the right circumstances, many people can be induced to violate what seems to be an internal representation of values that holds unethical behavior in check”.
The unethical senior management introduced such a system of compensation, incentives, and other corporate procedures that employees were encouraged to behave unethically towards one another and towards clients. The institutional environment encouraged immoral behaviour.

What Zak writes about Enron is true of any social group, be it a company, a university, a non-governmental organization, or the government itself. Social adjustment competes with the internal representation of moral values that holds unethical behaviour in check, and it is much more difficult for inborn morality to win if the head of the social structure is an immoral person who chooses people like himself to manage the company or the government. Unfortunately, conformity to immoral senior management is not punished by law in modern society.

The neuroscientists Tania Singer and Nikolaus Steinbeis consider that two motivations are especially important for decision making: fairness and sympathy. Disregard of social fairness makes people angry and provokes the desire to punish the violator of the moral value of social fairness; sympathy, on the contrary, makes them forgive him (Singer, Steinbeis, 2009).

In an experiment on the empathic neurobiological reaction connected with fairness, Tania Singer and her colleagues found that the empathic reaction was noticeably lower when a partner who had played dishonestly was subjected to pain in a subsequent experiment on empathy (Singer et al., 2004b). This lower reaction was even accompanied by the activation of brain structures responsible for reward and pleasure, which was especially typical of men compared with women (Singer et al., 2006). In other words, instead of the expected empathic reaction, the participants felt pleasure that their dishonest partners were in pain. These data agree with other researchers’ experiments in which participants showed an inclination to altruistic punishment of moral violators: they were ready to forgo their financial reward for the pleasure of punishing dishonest partners, just to satisfy their moral anger. So it is no wonder that we find such cases in the mass media, as described further.

The ultimatum game is popular in neuroeconomics, and the results are similar in all the experiments carried out by researchers in different countries. Glimcher describes the ultimatum game in the following way: “Two players in different cities, who have never met and who will never meet, sit at computer monitors. A scientist gives one of these players $10 and asks her to propose a division of that money between the two strangers. The second player can then decide whether to accept the proposed split. If she accepts, the money is divided and both players go home richer. If she rejects the offer, then the experimenter retains the $10, and the players gain nothing. What is interesting about this game is that when the proposer offers the second player $2.50 or less the second player rejects the offer. The result is that rather than going home with $2.50, the second player goes home with nothing. Why does she give up the $2.50?” (Glimcher, 2008).
Camerer states that an offer of 40-50% of the sum is considered fair, while an offer of 20% is rejected and the game stops (Camerer et al., 2005). However, if the rules of the game are changed and the players are told that they will compete for the largest sum of money, their behaviour changes: the sums offered decrease, and refusals to take amounts smaller than 40-50% decrease as well (Chorvat et al., 2004).
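The payoff logic of the game is small enough to state as code. The sketch below assumes Glimcher’s figures (a $10 stake and rejection of offers of $2.50 or less); the names and the sharp cutoff are illustrative simplifications of behaviour that is in reality graded.

```python
STAKE = 10.00            # dollars handed to the proposer
REJECTION_CUTOFF = 2.50  # responder rejects offers at or below this amount

def play_round(offer: float) -> tuple[float, float]:
    """Return (proposer payoff, responder payoff) for a single round."""
    if offer > REJECTION_CUTOFF:   # offer perceived as fair enough
        return STAKE - offer, offer
    return 0.0, 0.0                # rejection: the experimenter keeps the $10

for offer in (1.00, 2.50, 4.00, 5.00):
    p, r = play_round(offer)
    print(f"offer ${offer:.2f} -> proposer ${p:.2f}, responder ${r:.2f}")
```

A “purely rational” responder would accept any positive offer, so every rejected round is money deliberately left on the table; the cutoff in the sketch stands for exactly the fairness motive that the experiments measure.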

(in Nina Slanevskaya "Brain, Mind and Social Factors", St.Petersburg, Centre for Interdisciplinary neuroscience, 2014)

References
- Adolphs, R. (1999) “Social Cognition and the Human Brain. Review” in Trends in Cognitive Sciences, Vol. 3, No. 12: 469-479.
- Blair, R.J., Cipolotti, L. (2000) “Impaired Social Response Reversal. A Case of ‘Acquired Sociopathy’” in Brain 123: 1122-1141.
- Botvinick, M., Jha, A.P., Bylsma, L.M., Fabian, S.A., Solomon, P.E., Prkachin, K.M. (2005) “Viewing Facial Expressions of Pain Engages Cortical Areas Involved in the Direct Experience of Pain” in Neuroimage, 25: 312-19.
- Camerer, C., Loewenstein, G., Prelec, D. (2005) “Neuroeconomics: How Neuroscience Can Inform Economics” in Journal of Economic Literature, Vol. XLIII: 9-64.
- Chorvat, T., McCabe, K., Smith, V. (2004 ) “Law and Neuroeconomics” in George Mason University, School of Law, Law and Economics working paper series, Social Science Research Network Electronic Paper Collection.
- Christian, D. (2008) “The Cortex: Regulation of Sensory and Emotional Experience” in Noah Hass-Cohen and Richard Carr (eds.) Art Therapy and Clinical Neuroscience, London and Philadelphia, Jessica Kingsley Publishers: 62-75.
- Damasio, A. (2006) Descartes’ Error, London, Vintage.
- De Waal, F.B.M. (2008) “How Selfish an Animal? The Case of Primate Cooperation” in Paul J. Zak (ed.) Moral Markets. The Critical Role of Values in the Economy, Princeton University Press, Princeton and Oxford: 63-76.
- Filipenko, M.L., Alekseyenko, O.V., Beilina, A.G., Kamynina, T.P., Kudryavtseva, N.N. (2001) ”Increase of Tyrosine Hydroxylase and Dopamine Transporter mRNA levels in Ventral Tegmental Area of Male Mice under Influence of Repeated Aggression Experience” in Molecular Brain Research, 96(1-2): 77-81.
- Gallese, V. (2003) “The Roots of Empathy: The Shared Manifold Hypothesis and the Neural Basis of Intersubjectivity” in Psychopathology, 36: 171-180
- Gazzola, V., Aziz-Zadeh, L., Keysers, C. (2006) “Empathy and the Somatotopic Auditory Mirror System in Humans” in Current Biology, 16: 1824-1829.
- Glimcher, P.W. (2008) “The Neurobiology of Individual Decision Making, Dualism, and Legal Accountability” in C. Engel and W. Singer (eds.) Better Than Conscious? Implications for Performance and Institutional Analysis, Strüngmann Forum Report 1, Cambridge, MA, MIT Press, retrieved 01.02.2012.
- Greene, J. (2008) “The Secret Joke of Kant’s Soul” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 35-79.
- Hartling, L. (2007) “Humiliation: Real Pain, a Pathway to Violence” in Brazilian Journal of Sociology of Emotion 6(17): 466-479.
- Heberlein, A.S., Adolphs, R., Tranel, D., Kemmerer, D., Anderson, S., Damasio, A.R. (1998) “Impaired Attribution of Social Meanings to Abstract Dynamic Geometric Patterns Following Damage to the Amygdala” in Society for Neuroscience Abstracts, 24: 1176–7.
- Huemer, M. (2005) Ethical Intuitionism, Palgrave Macmillan.
- Hynes, C. (2008) “Morality, Inhibition, and Propositional Content” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 25-30.
- Kant, I. (1965) The Critique of Practical Reason, Moscow, Mysl.
- Kant, I. (1995b) Foundations of the Metaphysic of Morals, St.Petersburg, Nauka.
- Kennett, J., Fine, C. (2008) “Internalism and the Evidence from Psychopaths and ‘Acquired Sociopaths’” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 173-190.
- Kudryavtseva, N.N., Avgustinovich, D.F. (2006) Molecular mechanisms of social behavior: comments to the paper of Berton et al., in the journal “Neurosciences”, 4(6): 33-35 (the title translated from Russian).
- Lough, S., Gregory, C., Hodges, J.R. (2001) “Dissociation of Social Cognition and Executive Function in Frontal Variant Frontotemporal Dementia” in Neurocase 7: 123-130.
- McGeer, V. (2008) “Varieties of Moral Agency: Lessons from Autism (and Psychopathy)” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 227-258.
- Mikhail, J. (2008) “Moral Cognition and Computational Theory” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 81-91.
- Milgram, S. (2009) Obedience to Authority. An Experimental View, New York, Harper Perennial Modern Thought.
- Moll, J., de Oliveira-Souza, R., Eslinger, P.J. (2003) “Morals and the Human Brain: a Working Model” in Neuroreport, 14: 299-305.
- Moll, J., De Oliveira-Souza, R., Zahn, R., Grafman, J. (2008) “The Cognitive Neuroscience of Moral Emotions” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 1-17.
- Moll, J., Zahn, R., De Oliveira-Souza, R., Krueger, F., Grafman, J. (2005) “The Neural Basis of Human Moral Cognition” in Nature Reviews Neuroscience, Vol. 6: 799-809.
- Newberg, A., Waldman, M. (2009) How God Changes Your Brain, New York, Ballantine Books.
- Nichols, S., Folds-Bennett, T. (2003) “Are Children Moral Objectivists? Children’s Judgments about Moral and Response-dependent Properties” in Cognition, 90(2): B23-B32.
- Pfaff, D. (2007) The Neuroscience of Fair Play. Why We (Usually) Follow the Golden Rule, New York, Washington, Dana Press.
- Rizzolatti, G., Fogassi, L., Gallese, V. (2006) “Mirrors in the Mind” in Scientific American, 295 (5): 54-61.
- Sebastian, C., Viding, E., Williams, K., Blakemore, S. (2010) “Social Brain Development and the Affective Consequences of Ostracism in Adolescence” in Brain and Cognition, 72(1): 134-145.
- Singer, T., Kiebel, S., Winston, J., Dolan, R.J., Frith, C.D. (2004b) “Brain Responses to the Acquired Moral Status of Faces” in Neuron, 41(4): 653-62.
- Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R., Frith, C. (2004a) “Empathy for Pain Involves the Affective but Not Sensory Components of Pain” in Science, Vol. 303, No. 5661: 1157-1162.
- Singer, T., Seymour, B., O’Doherty, J., Stephan, K., Dolan, R., Frith, C. (2006) “Empathy Neural Responses Are Modulated by the Perceived Fairness of Others” in Nature, Vol. 439: 466-469.
- Singer, T., Steinbeis, N. (2009) “Differential Roles of Fairness – and Compassion-Based Motivations for Cooperation, Defection, and Punishment” in Values, Empathy, and Fairness Across Social Barriers, Annals of the New York Academy of Sciences, 1167: 41-50.
- Tancredi, L. (2005) Hardwired Behavior: What Neuroscience Reveals about Morality, Cambridge University Press.
- Taylor, S., Eisenberger, N., Saxbe, D., Lehman, B., Lieberman, M. (2006) “Neural Responses to Emotional Stimuli Are Associated With Childhood Family Stress” in Biological Psychiatry, 60:
- Twenge, J., Ciarocco, N., Baumeister, R., DeWall, C.N., Bartels, J.M. (2007) “Social Exclusion Decreases Prosocial Behavior” in Journal of Personality and Social Psychology, Vol. 92, No. 1: 56-66.
- Vignemont, F., Frith, U. (2008) “Autism, Morality and Empathy” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 273-280.
- Wicker, B., Keysers, C., Plailly, J., Royet, J., Gallese, V., Rizzolatti, G. (2003) “Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust” in Neuron, Vol. 40: 655-664.
- Wood, J.N., Grafman, J. (2003) “Human Prefrontal Cortex: Processing and Representational Perspectives” in Nature Reviews Neuroscience, 4: 139-147.
- Zak, P.J. (2008) “Values and Value” in Paul J. Zak (ed.) Moral Markets. The Critical Role of Values in the Economy, Princeton University Press, Princeton and Oxford: 259-279.
- Zimbardo, P., Maslach, C., Haney, C. (2000) “Reflections on the Stanford Prison Experiment: Genesis, Transformation, Consequences” in T. Blass (ed.) Obedience to Authority: Current Perspectives on the Milgram Paradigm, Mahwah, NJ, Lawrence Erlbaum Associates: 193-237.

 


 

© 2009 N.M. Slanevskaya