Wednesday, May 29, 2013

The price of your soul: neural evidence for the non-utilitarian representation of sacred values

One common way that people reject the claim that there is a common currency in which all available options are represented is by raising the possibility that some options are ‘incommensurable’. For two things to be incommensurable in general means that there isn’t some standard of comparison which applies to both of them. So the relevance of such claims to the common currency thesis is pretty direct.

Barry Schwartz, for example, holds that “some sets of commodities are simply incomparable or incommensurable” (Schwartz 1986, p154). Schwartz has in mind, among other things, comparisons of outcomes we would regard as morally significant, such as charitable giving, with cases of more ordinary consumption.

Perhaps the most common context in which people will claim that some option (or type of option) is incommensurable in this way is with reference to options that are moral, or purportedly sacred. (The two are often supposed to coincide, so a moral view about the value of life might well be expressed as a claim about the 'sanctity' of life.)

It is clear enough what vague idea is being expressed here. We might reasonably and calmly suppose that there is some definite number of milkshakes that I would have to be given in order to forgo a piece of cake. But most people actively resent the suggestion that there is any answer to the question of how many milkshakes you’d need to give them before they’d sell one of their children. (And they resent both the suggestion that there is a definite answer, and the attempt to get them to reveal their price.)

The vague idea, then, is this: Some values are of different kinds, and they don’t even have ‘prices’ in more mundane things people might want. It is, furthermore, somehow wrong - not merely incorrect - to suppose anything else.

If something like this is true, then there isn’t a single common currency. There might be two currencies – one with conventional consumption goods on it, and another with moral values. There’s reason to doubt that such a picture is actually coherent, or at least that it could be worked out in detail coherently, which is why I say that the idea is vague, even if it is clear what it is. (I’ll pick up this point in a future posting on the generic ‘anti-dualist’ argument.)

One point to make here is, of course, that talk is cheap. People routinely avow their commitment to standards of conduct of which in practice they fall well short. (The majority of sworn undertakings of fidelity in the course of marriage ceremonies are probably entirely sincere at the time.) So the mere fact that many people (at least) assert that they have values that are unconditional or have no pragmatic price does not establish that anyone does.

That said, there are certainly many prima facie credible examples of people who seem to place some value or other far beyond other considerations and temptations, and who upheld these values in the face of death, or under torture, etc. We could perhaps say that these people were never offered a great enough temptation, and suppose that if they had been then their allegedly unconditional values would have gone the way of many marriage vows. But we'd be speculating, and it would be fairer to admit that the behavioural evidence, in at least many cases, just doesn’t settle the question either way.

It would be good to have some evidence of different kinds, including evidence about what is going on ‘under the hood’, which is to say in the brains of people purportedly responsive to values of different kinds, including putatively 'sacred' ones.

A recent paper in Philosophical Transactions of the Royal Society B offers just that. It’s by a set of collaborators headed by Gregory S. Berns, and purports to show ‘neural evidence for the non-utilitarian representation of sacred values’.

Here’s what they did


Subjects (32 in the scanner, and an additional 11 outside the scanner) participated in an experiment divided into four phases, with neural (fMRI) data acquired in all stages for the in-scanner group:

In phase one, “participants were presented with value statements phrased in the second person, one at a time.” In this stage participants made no choices.

There were 62 pairs of statements, including ones about mundane preferences (‘You are a Coke drinker’) and ones about what Berns et al. call ‘sacred’ matters (‘You believe that all Jews should have been killed in WWII’).

In phase two “complementary statements were presented together, and for each pair, the participant had to choose one of the statements.”

In phase three, participants were asked hypothetically if there was a dollar amount that they would accept in order to reverse their choices from phase two.

The fourth phase was an auction about which subjects had not been told in the briefing for the preceding phases (so that expectations about the auction did not influence earlier choices). Here subjects “were given the opportunity to sell their answers from the active phase for real money.” (The ‘active’ phase is phase two.) Selling an answer meant signing a statement disavowing the original answer. This phase used a Becker–DeGroot–Marschak (BDM) auction mechanism, with bids ranging from $1 to $100.
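For readers unfamiliar with it, the BDM mechanism is an incentive-compatible way of eliciting a seller’s true reservation price. Here is a minimal sketch in Python of a generic BDM selling auction (my own illustration with made-up numbers, not the authors’ implementation): the subject states the lowest amount they would accept, a counter-offer is drawn at random, and the sale goes through at the offer price only if the offer meets or exceeds the stated amount.

```python
import random

def bdm_sell_auction(stated_price, low=1, high=100):
    """One round of a generic BDM selling auction (illustration only)."""
    # The subject states the lowest amount they would accept for disavowing
    # an answer; a counter-offer is then drawn uniformly at random.
    offer = random.randint(low, high)
    if offer >= stated_price:
        # The sale happens at the *randomly drawn* price, not the stated one,
        # which is what makes truthful reporting the best strategy.
        return {"sold": True, "payment": offer}
    return {"sold": False, "payment": 0}

# Hypothetical example: a subject asks $40 to disavow one of their answers.
print(bdm_sell_auction(40))
```

The point of the random counter-offer is that overstating or understating your true reservation price can only hurt you, so the stated amounts can be treated as honest measures of how much an answer is worth to the subject.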

A follow-up survey between 6 and 14 months after the experimental session (without scanning) attempted to assess the stability of people’s answers from phase two, and asked for a rationale for each answer, with three options: “(i) right and wrong; (ii) costs and benefits; and (iii) neither.”

In addition to this, an on-line survey with 334 subjects was used to validate aspects of the design, especially the classification of statements.

Here’s what they found


First, the ‘sacred’ values were put up for sale (hypothetically and actually) far less frequently than the mundane ones. See figure 1 (from the paper). The inset bar graph shows the fraction of responses sold as a function of whether the statement was classified as deontic (identified as based on ‘right/wrong’), utilitarian (‘cost/benefit’), or neither, and shows that there was a much higher opt-out (don't sell) rate for statements classified as ‘deontic’.

Figure 1 of the paper.

Second, there seems to be a different neural division of labour for statements about right and wrong and statements about more mundane preferences. In the paper this is expressed as a distinction between ‘deontic processing’ and ‘utilitarian processing’.

The main evidence for this claim is a fairly rich set of comparisons in which the ways subjects explained their choices (right/wrong vs. cost/benefit), their actual choices, what they hypothetically said they might disavow for money, and what they were actually prepared to disavow (and for how much), were all used in different ways to partition the neural data, which was then examined for contrasts. A number of possible confounds were identified, and attempts made to control for them. You should read the paper for the details.
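To give a rough sense of what that partitioning amounts to, here is a toy illustration in Python (my own sketch with simulated numbers, not the authors’ analysis pipeline): trials are labelled by a behavioural variable, in this case the self-reported rationale, and activity in a region of interest is then compared across the resulting groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-trial activity in a region of interest (e.g. one value per
# statement) and a per-trial behavioural label -- both entirely made up here.
roi_activity = rng.normal(size=120)
rationale = rng.choice(["deontic", "utilitarian", "neither"], size=120)

# Partition the neural data by the behavioural variable...
deontic = roi_activity[rationale == "deontic"]
utilitarian = roi_activity[rationale == "utilitarian"]

# ...and examine the simplest possible contrast: a difference of means.
contrast = deontic.mean() - utilitarian.mean()
print(f"deontic - utilitarian contrast: {contrast:.3f}")
```

The actual analysis is of course much richer than this, but the basic logic of using behavioural classifications to carve up the neural data and then looking for contrasts is the same.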

The simple bottom line from the neural analysis is, as the authors put it in the first paragraph of their discussion:

“These results provide strong evidence that when individuals naturally process statements about sacred values, they use neural systems associated with evaluating rights and wrongs (TPJ) and semantic rule retrieval (VLPFC) but not systems associated with utility. The involvement of the TPJ is consistent with the conjecture that moral sentiments exist as context independent knowledge in temporal cortex. Both the left and right TPJ have been associated with belief attribution during moral judgements of third parties. Our results show that it is also involved in the evaluation of personal sacred values without decision constraints. Thus, one explanation for the reduction in morally prohibited judgements when the TPJ is disrupted by transcranial magnetic stimulation is because disruption impairs access to personal deontic knowledge.” [TPJ = temporoparietal junction; VLPFC = ventrolateral prefrontal cortex.]

Here’s what I think


This post is a bit long already, so I’ll be brief for now. (I plan to write about some related papers on the ‘sacred values’ topic in the future.)

(1) Before I say anything critical, let me say that I think this is a good and valuable paper that helps open up a really worthwhile line of enquiry. One way to deal with a problem of underdetermination (in this case the problem that we can’t tell from behaviour whether people really have any unconditional values) is to get different kinds of data that can help constrain how we interpret the data we do have.

That general methodological principle informs many design decisions in this paper, where a mixture of passive, active, and self-report tasks is related to behavioural measures of varying kinds (hypothetical avowals, and real bids at auction), all correlated with neural data. My brief overview attempts to describe the gist, but the details are well worth working through.

(2) That said, I find some of the ways claims are expressed in the paper unfortunate. Utilitarianism is a theory – or a family of theories – about right and wrong, and so supposing a dichotomy between utilitarian and right/wrong processing seems confused. In addition, deontological and consequentialist theories are not primarily descriptive theories about how people think, but normative ones about how they should behave. Both types of theory are clearly committed to some views about how it might be possible for people to act in accordance with those theories, but they are not about cognitive processes. Most of the formulations of claims about ‘deontic’ and ‘utilitarian’ processing in the paper can be taken as elliptical for more careful statements that aren’t overtly problematic. But it’s still annoying to see the terminology used so loosely, and also to see worked-out and appropriate distinctions (such as the notion of ‘lexical preferences’) not being deployed.

(3) Competing values need to … well … compete. It’s all very well suggesting that some values are unconditional, but if they motivate action then it seems as though they need to show up in the same place other motivations do, and come with some motivational ‘force’ of the same kind, even if of a different degree, as whatever temptations bring with them. Reason, Hume said, must be the slave of the passions. The sacred, we might analogously say, must be so too, if it is to be motivating.

George Ainslie quotes the same claim about incommensurability from Barry Schwartz that I quote above, and goes on to say:
But if behaviors are not selected according to a single standard of choosability, the standard summarized by the term “reward” […], how are they selected? The organism’s means for expression are limited. A single channel of attention, if not a single set of muscles, is needed for the variety of behaviors that physically can be substituted for one another. Assuming that the selection of these behaviors is determinate, there must be a means of comparing them along a common dimension (Ainslie 1992, 31).
I'm not saying that there is a decisive and general argument in favour of a common currency that automatically applies to putatively sacred values. (I don't think there is a general argument in favour of a common currency; rather - in case you were wondering - I think it's a contingent fact that humans approximately have one some of the time.) My point is simply that there is an argument worth taking seriously here, and that it would be welcome to see those discussing sacred values taking it more seriously.

(As a last note on this point, there's growing evidence that choices across a wide range of different modalities have a common neural value representation in humans. For more, see immediately below.)

(4) Here are some experiments that I’d like to see in the future (and that may well be in the pipeline):

(a) What happens when ‘sacred’ values are in conflict with each other? People manifestly do end up in this condition. On the one hand it is the basis of some of our most compelling literature (consider Antigone’s position, between sacred duty to her dead brother Polyneices and similarly sacred duty to obey Creon, the king of Thebes). On the other, we have various kinds of trained specialist, such as triage nurses, whose job requires trading off competing moral (maybe 'sacred') values.

(b) What neuroeconomic sense can be made of this data, or of the same phenomena in a more specifically neuroeconomic setting? It would be very interesting to know more about how conflict between ‘sacred’ values is neurally represented, and also to know how (if at all) values associated with ‘sacred’ options are represented in relation to the long and growing list of rewards in various modalities that do seem to have a common basis for representation. See this (soon to be expanded) evidence rack.

That’s all I’ve got time for now.

Full text of the abstract of the paper:

Sacred values, such as those associated with religious or ethnic identity, underlie many important individual and group decisions in life, and individuals typically resist attempts to trade off their sacred values in exchange for material benefits. Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests that they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We used an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

Related posts (some forthcoming)

The generic ‘anti-dualist’ argument

References

Ainslie, G. 1992. Picoeconomics. Cambridge: Cambridge University Press.

Schwartz, B. 1986. The Battle for Human Nature: Science, Morality and Modern Life. New York: Norton.

Berns, G., Bell, E., Capra, C., Prietula, M., Moore, S., Anderson, B., Ginges, J., & Atran, S. 2012. The price of your soul: neural evidence for the non-utilitarian representation of sacred values. Philosophical Transactions of the Royal Society B: Biological Sciences 367 (1589): 754-762. DOI: 10.1098/rstb.2011.0262
