Consequences, Action, and Intention as Factors in Moral Judgments: An fMRI Investigation

Jana Schaich Borg, Catherine Hynes, John Van Horn, Scott Grafton, and Walter Sinnott-Armstrong
Dartmouth College

Journal of Cognitive Neuroscience 18:5, pp. 803-817
© 2006 Massachusetts Institute of Technology

Abstract

The traditional philosophical doctrines of Consequentialism, Doing and Allowing, and Double Effect prescribe that moral judgments and decisions should be based on consequences, action (as opposed to inaction), and intention. This study uses functional magnetic resonance imaging to investigate how these three factors affect brain processes associated with moral judgments. We find the following: (1) Moral scenarios involving only a choice between consequences with different amounts of harm elicit activity in similar areas of the brain as analogous nonmoral scenarios; (2) Compared to analogous nonmoral scenarios, moral scenarios in which action and inaction result in the same amount of harm elicit more activity in areas associated with cognition (such as the dorsolateral prefrontal cortex) and less activity in areas associated with emotion (such as the orbitofrontal cortex and temporal pole); (3) Compared to analogous nonmoral scenarios, conflicts between goals of minimizing harm and of refraining from harmful action elicit more activity in areas associated with emotion (orbitofrontal cortex and temporal pole) and less activity in areas associated with cognition (including the angular gyrus and superior frontal gyrus); (4) Compared to moral scenarios involving only unintentional harm, moral scenarios involving intentional harm elicit more activity in areas associated with emotion (orbitofrontal cortex and temporal pole) and less activity in areas associated with cognition (including the angular gyrus and superior frontal gyrus). These findings suggest that different kinds of moral judgment are preferentially supported by distinguishable brain systems.

INTRODUCTION

Ever since Socrates debated sophists and stoics wrangled with skeptics, philosophers have argued about whether moral judgments are based on reason or on emotion. Although definitions of morality may vary across cultures and philosophies in other ways, all definitions of "moral judgments" include judgments of the rightness or wrongness of acts that knowingly cause harm to people other than the agent. These central moral judgments are distinct from economic or prudential judgments based on the agent's own interest, both because the moral judgments depend on interests of other people and because they focus on harms as opposed to, say, pleasure. The present study addresses core moral judgments of this kind (Nichols, 2004).

To investigate whether such core moral judgments are based on emotion or on reason, these terms must first be defined. For our purposes, "emotions" are immediate valenced reactions that may or may not be conscious. We will focus on emotions in the form of negative affect. In contrast, "reason" is neither valenced nor immediate insofar as reasoning need not incline us toward any specific feeling and combines prior information with new beliefs or conclusions, usually in the form of cognitive manipulations (such as evaluating alternatives) that require working memory. Emotion might still affect, or even be necessary for, reasoning (Damasio, 1994), but emotion and reasoning remain distinct components in an overall process of decision making.

In modern times, Hume (1888) and many utilitarian philosophers based morality on emotion or sentiment via what the former called "sympathy" and what contemporary psychologists call "empathy." In their view, core moral judgments arise from an immediate aversive reaction to perceived or imagined harms to victims of actions that are judged as immoral only after and because of this emotional reaction. In contrast, Kant (1959) insisted that his basic nonutilitarian moral principle (the categorical imperative) could be justified by pure reason alone, and particular judgments could then be reached by reasoning from his basic principle, all without any help from emotion. Although somewhat transformed, this fundamental debate still rages among philosophers today.

Such traditional issues are difficult to settle in an armchair, yet some progress has been made with the help of recent brain imaging techniques. Studies using

functional magnetic resonance imaging (fMRI) surprisingly suggest that neither Kant nor Hume had the whole truth, and that some moral judgments involve more emotion whereas others involve more reasoning. For example, neural systems associated with emotions are activated more by personal moral dilemmas than by impersonal moral dilemmas (Greene, Nystrom, Engell, & Darley, 2004; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). Personal moral dilemmas were labeled "personal" because the agent gets "up close and personal" with the victim in most such cases. Greene, Nystrom, et al. (2004) and Greene, Sommerville, et al. (2001) formally define personal moral dilemmas, without reference to physical proximity, as cases where an otherwise desirable action is (1) likely to cause serious bodily harm (2) to a particular person or group of persons (3) not by deflecting an existing threat onto a different party. A paradigm personal moral dilemma is the footbridge case, where the only way to save five people from a runaway trolley is to push a fat man off of a footbridge in front of the trolley so as to stop the trolley before it hits the five people. A paradigm impersonal moral dilemma is the sidetrack case, where the only way to save five people is to redirect a runaway trolley onto a sidetrack where it will kill one person. Most people judge that it is morally wrong to push the fat man off the footbridge; fewer people judge that it is morally wrong to redirect the trolley onto the sidetrack. The question is why and how people make these contrasting moral judgments.

These trolley cases differ as to whether a threat is created (as in pushing the fat man) or merely deflected (as in redirecting the trolley onto the sidetrack), but there are other differences as well. For example, the agent gets closer to the victim in the footbridge case, and the agent in the footbridge case could jump in front of the trolley instead of pushing the fat man, whereas this option is not available in the sidetrack case. Such complications presumably explain why Greene, Sommerville, et al. (2001) admit that their distinction between personal and impersonal scenarios is only "a useful 'first cut,' an important but preliminary step toward identifying the psychologically essential features of circumstances that engage (or fail to engage) our emotions and that ultimately shape our moral judgments . . ." (p. 2107).

The exploratory fMRI study reported here makes a "second cut" by evaluating three factors picked out by traditional moral principles that might underlie the distinction between personal and impersonal moral problems. One classic moral theory is Consequentialism, which claims roughly that we morally ought to do whatever has the best consequences overall (Sinnott-Armstrong, 2003). Opposed to Consequentialism are two deontological principles. The Doctrine of Doing and Allowing (DDA) says that it takes more to justify doing harm than to justify allowing harm; thus, for example, it is sometimes morally wrong to commit an act of killing in circumstances where it would not be morally wrong to let someone die by refraining from an act of saving (Howard-Snyder, 2002). Redirecting the trolley onto the sidetrack violates the DDA insofar as it involves positive action that causes death and, thus, counts as killing, but the DDA is not violated by merely letting someone drown to be able to save five other drowning people. The Doctrine of Double Effect (DDE), in contrast, holds that it takes more to justify harms that were intended either as ends or as means than to justify harms that were known but unintended side effects; thus, for example, it is sometimes morally wrong to intend death as a means when it would not be morally wrong to cause that death as an unintended side effect (McIntyre, 2004). Pushing the fat man in front of the trolley in the footbridge case violates the DDE because the agent intends to use the fat man as a means to stop the trolley and save the five people on the main track. In contrast, deflecting the trolley in the sidetrack case does not violate the DDE because the victim's death in that case is only an unintended side effect that is not necessary for the agent's plan to succeed in saving the five people on the main track. Both the DDA and the DDE conflict with Consequentialism because these deontological principles claim that factors other than consequences matter, so it is sometimes morally wrong to do what has the best consequences overall.

An empirical study cannot, of course, determine which moral theory is correct or which acts are morally wrong. That is not the purpose of our study. Our goal is only to use traditional theories, which pick out factors that affect many people's moral intuitions, as tools to explore neural systems involved in moral judgment. The factors used in this study play a significant role not only in moral philosophy but also in law (because people are usually not guilty of first-degree murder when they merely let people die and do not intend death) and religion (such as when the Catholic Church cites a prohibition on intended harm to justify its official positions on abortion and euthanasia). Psychological studies have documented omission bias (Kahneman & Tversky, 1982) and intention bias (Hauser, 2006) in moral judgment, substantiating the impact of action and intention on law and religion. Other factors, such as proximity to the victim and creation vs. deflection of a threat, may also affect moral judgments but were not investigated explicitly in this study.

Based on previous studies of neural correlates of moral judgments (surveyed in Moll, de Oliveira-Souza, & Eslinger, 2003; Greene & Haidt, 2002), we hypothesized that: (1) the medial frontal gyrus (Brodmann's area [BA] 10), (2) the frontopolar gyrus (BA 10), and (3) the posterior superior temporal sulcus (STS)/inferior parietal lobe (BA 39) would be more active when considering moral scenarios than when considering nonmoral scenarios, irrespective of consequences, action, and intention. Hypotheses regarding the differential effects of consequences, action, and intention were then framed with respect to anatomic circuits linked to emotion and cognition. The paralimbic system, including the amygdala, cingulate cortex, hippocampal formation, temporal pole, and ventromedial prefrontal (including orbitofrontal) cortex (Mesulam, 2000), has been credited as the "emotional" system of the brain, whereas the "central executive" system (Baddeley, 1986), including regions of the parietal lobe and lateral regions of the prefrontal cortex, has been credited as the "cognitive" system of the brain (Miller & Cohen, 2001). We hypothesized that negative affect or emotion would be associated with violations of established moral doctrines, and thus (4) the paralimbic system would be activated more in thinking about actions that cause harm (and thus violate the DDA) than in thinking about similarly harmful inactions or action omissions. We further hypothesized (5) that the paralimbic system would be more activated when thinking about intending harm as a means (which violates the DDE) than in thinking about causing harm as an unintended side effect. Conversely, cognitive regions of the brain involved in reasoning would be activated relatively more in considering moral scenarios that did not include violations of either the DDA or the DDE.

Another factor that is often said to affect emotion and moral judgment is the language used to describe a scenario. Past studies of moral judgment seem to describe moral or personal moral scenarios with more colorful language than their nonmoral or "impersonal" moral counterparts (Greene, Nystrom, et al., 2004; Greene, Sommerville, et al., 2001). If so, this confound might explain greater activations observed in emotional systems. To test the effects of language, we presented each moral scenario in both dramatic (colorful) and muted (noncolorful) language. We hypothesized (6) that moral scenarios presented in colorful language would activate regions of the paralimbic system more than otherwise similar moral scenarios presented in plain language.

The experimental design used to test these hypotheses required some unique features. Previous studies on moral judgment compared moral stimuli to nonmoral unpleasant stimuli (Moll, de Oliveira-Souza, Bramati, & Grafman, 2002) or semantic improprieties (Heekeren, Wartenburger, Schmidt, Schwintowski, & Villringer, 2003), neither of which consistently involves nonmoral social processes. As a result, activations that appear in their respective moral vs. control contrasts may represent general social processing rather than uniquely moral processing. Furthermore, Greene, Nystrom, et al. (2004), Heekeren et al. (2003), and Greene, Sommerville, et al. (2001) ambiguously asked their subjects to judge whether actions in their moral conditions were "appropriate" or "inappropriate." It is unclear how subjects construed this request (according to their own moral values, what society deems acceptable, or what is legal), making it difficult to determine whether the aforementioned study results really reflect the processes that underlie moral judgment in particular. Moreover, the cognitive processing required by previous control conditions was only weakly matched to that required by their moral conditions, again making it difficult to determine which cognitive processes accompany moral judgment in comparison to other kinds of social judgment.

To avoid possible confounds of past studies, we restricted our moral scenarios to issues of killing and letting die rather than other moral topics, such as rape, theft, and lying. Our nonmoral scenarios described destruction of objects of personal value rather than harm to other people. Hence, although our nonmoral scenarios involved other people (such as firefighters and clerks) and drew upon other kinds of social processing, they did not require any core moral judgments or specifically moral processing. All variables of the factorial design were matched so that nonmoral scenarios had the same combinations of consequence, action, intention, and language conditions as moral scenarios. Moral scenarios were then compared directly to nonmoral scenarios, rather than to a baseline or a separate, less demanding cognitive condition. Instead of asking the subjects whether it would be appropriate to perform an action, we asked "Is it wrong to (action appropriate to the scenario)?" and "Would you (action appropriate to the scenario)?" By asking both questions, we hoped to reduce the risk that different subjects would approach the scenarios with different questions in mind.

METHODS

Experimental Design

The factors described in the Introduction were operationalized into four variables (morality, type, means, and language; Table 1) and entered into a factorial design (Table 2).

The morality variable had two levels: "moral" scenarios described harm to people, and "nonmoral" scenarios described harm to objects of personal value. Thus, only moral scenarios asked for core moral judgments as defined in the Introduction. The type variable had three levels: "numerical consequences," "action," and "both." "Numerical consequences" scenarios described an action that would harm a smaller number of people/objects and another action that would harm a larger number of people/objects. Because it would be nonsensical to offer separate options describing two different inactions, options in numerical consequences scenarios were presented as positive actions. More harmful options represented violations of Consequentialism (although consequentialists take into consideration many

effects other than the number harmed). "Action" scenarios then described an action that would harm the same number (but a different group) of people/objects as would be harmed if the act were omitted. Action scenarios, therefore, proposed violations of the DDA. "Both" scenarios described an action that would harm fewer people than would be harmed if the act were omitted, thus portraying conflicts between the DDA and Consequentialism. If numerical consequences and action had been separated into independent two-level variables, one of their interactions would have been a cell describing two options of action that saved/killed the same number of people. Given that all parties in the scenarios were anonymous and that all other variables were held constant, subjects would have had to choose arbitrarily. Because we would have had no way to control for the influences on such arbitrary choices, and because the motivations behind such choices would likely involve nonmoral processing, we combined the numerical consequences and action variables into the three-level variable, "type."

The means variable had two levels: "means" scenarios, which described intended harm, and "nonmeans" scenarios, which described foreseen but unintended harm. Means scenarios proposed violations of the DDE. The language variable also had two levels: "colorful" scenarios were described in dramatic language, and "plain" scenarios were described in muted language. Our four variables together constituted a 2 (Morality) x 3 (Type) x 2 (Means) x 2 (Language) design (Table 2). Due to timing constraints, we had two scenarios in each of the moral factor cells (24 moral scenarios) and one scenario in each of the nonmoral factor cells (12 nonmoral scenarios).

Table 1. Experimental Variables

Morality
  Moral: acting on people (e.g., trolley scenario)
  Nonmoral: acting on objects and possessions
Type
  Numerical consequences: harming x people/objects vs. harming y people/objects (Consequentialism)
  Action: harming x people/objects vs. letting x people/objects be harmed (DDA)
  Both (numerical consequences + action): harming x people/objects vs. letting y people/objects be harmed (Consequentialism + DDA)
Means
  Means: intentionally using some people/objects as a means to save others (DDE)
  Nonmeans: causing unintentional but foreseen harm to people/things to save others
Language
  Colorful: described with more detailed imagery and dramatic words
  Plain: described with plain imagery and simple words

DDA = Doctrine of Doing and Allowing; DDE = Doctrine of Double Effect.

Each scenario block consisted of a series of three screens (Figure 1). The first screen described the scenario. The second and third screens posed the questions "Is it wrong to (action appropriate to the scenario)?" and "Would you (action appropriate to the scenario)?", which were presented in randomized order. Subjects read and responded to the scenarios at their own pace, pressing the right button to answer "yes" and the left button to answer "no." Each response advanced the stimulus program to the next screen. Subjects' responses and response times to both questions were recorded. Four runs of nine fully randomized scenario blocks were presented with 30 sec of rest at the beginning and at the end of each run. Presentation software (http://nbs.neuro-bs.com) was used to present all stimuli and to record responses and response times.

Subjects were informed of the provocative nature of the scenarios before entering the scanner. They were also told that they would have to answer the questions "Is it wrong to (action appropriate to the scenario)?" and "Would you (action appropriate to the scenario)?" after each scenario, so that they would understand the

Table 2. Experimental Design

The four variables were fully crossed: Morality (moral, nonmoral) x Type (numerical consequences, action, both) x Means (means, nonmeans) x Language (colorful [C], plain [P]).
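As a sanity check on the scenario counts, the factorial design described above can be enumerated programmatically. The sketch below is illustrative only: the variable names, dictionary layout, and randomization seed are our assumptions, not the authors' actual stimulus-delivery code (which used Presentation).

```python
# Illustrative sketch of the 2 x 3 x 2 x 2 factorial design:
# 2 scenarios per moral cell, 1 per nonmoral cell, shuffled into 4 runs of 9.
import itertools
import random

MORALITY = ["moral", "nonmoral"]
TYPE = ["numerical_consequences", "action", "both"]
MEANS = ["means", "nonmeans"]
LANGUAGE = ["colorful", "plain"]

def build_scenarios(seed=0):
    """Enumerate all 24 design cells and assign scenarios per cell:
    two per moral cell (24 moral), one per nonmoral cell (12 nonmoral)."""
    scenarios = []
    for morality, typ, means, lang in itertools.product(
            MORALITY, TYPE, MEANS, LANGUAGE):
        n_per_cell = 2 if morality == "moral" else 1
        for item in range(n_per_cell):
            scenarios.append({"morality": morality, "type": typ,
                              "means": means, "language": lang, "item": item})
    # Fully randomize the 36 scenario blocks into 4 runs of 9 blocks each.
    rng = random.Random(seed)  # seed is an arbitrary choice for reproducibility
    rng.shuffle(scenarios)
    runs = [scenarios[i * 9:(i + 1) * 9] for i in range(4)]
    return scenarios, runs

scenarios, runs = build_scenarios()
print(len(scenarios))                                     # 36 scenario blocks
print(sum(s["morality"] == "moral" for s in scenarios))   # 24 moral scenarios
print([len(r) for r in runs])                             # [9, 9, 9, 9]
```

Enumerating the cells this way makes the counts in the text explicit: 2 x 3 x 2 x 2 = 24 cells, of which the 12 moral cells contribute 24 scenarios and the 12 nonmoral cells contribute 12, for 36 blocks split across four runs.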

