Friday, February 20, 2015

Intragenomic Conflict and Intrapersonal Conflict

Here is the abstract of a working paper that is part of my Common Currency project. This one has dragged on for some time. I’m not sure when I first gave a talk on the topic, but it was probably late in 2012, and I’ve peddled this stuff at various venues between then and now. Along the way I’ve read loads of interesting biology, and learned a lot. I’ve also changed my mind a few times over the details, but seem to have stabilized on a view that I think I can defend. I hope to have a full draft paper that is fit to be posted on this site before the middle of 2015.

Intragenomic Conflict and Intrapersonal Conflict

David Spurrett (UKZN)

Abstract


It is now recognised that different parts of the genome of a single individual, especially autosomal genes inherited from a male parent and from a female parent, can sometimes have conflicting interests. One mechanism allowing these conflicts to be expressed in some species, including mammals, is genomic imprinting, which modulates the level of expression of some genes depending on parent of origin. Several leading biologists, including William Hamilton and Robert Trivers, but particularly David Haig, have suggested that this intragenomic conflict may explain, or predict, some kinds of intrapersonal motivational conflict in humans. Here I seek to assess this suggestion, especially as developed by Haig (2006).

There are two (potentially) complementary ways in which genomic conflict might be related to motivational conflict. One concerns pattern in behaviour, and the other concerns the processes, or mechanisms, by which behaviours are selected. (This corresponds roughly to the distinction between ultimate and proximate explanation.) A conflicted pattern in behaviour won’t be consistent with a single preference ordering, whereas a conflicted process of behaviour selection will be in some way constitutionally disunified, or fractious. In the first case I argue that the phenomenon of intragenomic conflict has at most the consequence that pattern in behaviour won’t correspond in any simple way to the collective interests of the genes as understood from a perspective neglecting genomic conflict. This just isn’t the same thing as being inconsistent with some preference ordering. The failure of an inference from genomic conflict to individual behavioural inconsistency, however, leaves open the possibility that genomic conflict is expressed in the behaviour selection process.

The case of mechanisms is more complicated because of the large variety of available models of the behaviour selection process. I review a number of leading proposals, and argue in each case that intragenomic conflict either does not predict conflict over behaviour selection, or would at most modulate the conflict already predicted by the model. Considered in relation to existing psychological models, then, it seems as though genomic conflict does not predict conflict. Finally, I develop a suggestion hinted at in Haig, and argue that there are indeed coherent scenarios in which conflicting genes could influence behaviour, on the model of mind-controlling parasites rather than by inputs to an established behaviour selection system. Whether any conflicting genes in fact operate in these ways is, of course, an empirical matter.

References

Haig, D. (2006) Intrapersonal conflict. In M.K. Jones and Fabian (eds.), Conflict, pp. 8-22. Cambridge: Cambridge University Press.


Table of contents


1. Introduction

2. Intrapersonal Conflict?

3. Intragenomic conflict and genomic imprinting

3.1. Intragenomic conflict

3.2. Imprinting and PSGE

4. Haig on intrapersonal conflict

4.1. An adaptive rationale for inconsistency?

4.2. A mechanism for sub-personal conflict?

{4.3. Conditional strategies and mind-controlling parasites}

{5. Objections and clarifications}

5.1. What about Badcock and Crespi?

5.2. Behaviour isn’t special, but consistency is.

5.3. What about Haig’s remarks on common currencies?

5.4. Who cares?

6. Conclusion


{Curly brackets denote a section that might not make it into the final paper.}


Wednesday, February 4, 2015

Draft: The Natural History of Desire

I previously posted the abstract and conference slides of this working paper, presented a few weeks ago at the annual conference of the Philosophical Society of Southern Africa, held in Port Elizabeth this year. What appears below is a working draft of the paper, edited down to the word limit for submission to the conference volume.

The Natural History of Desire

David Spurrett – UKZN – spurrett@ukzn.ac.za

Abstract

In Thought in a Hostile World (2003) Kim Sterelny develops an idealised natural history of folk-psychological kinds. He argues that under certain selection pressures belief-like states are a natural elaboration of simpler control systems which he calls detection systems, and which map directly from environmental cue to response. Belief-like states are distinguished by the properties of robust tracking (being occasioned by a wider range of environmental states, including distal ones) and response breadth (being able to feature in the triggering of a wider range of behaviours). Key drivers, according to Sterelny, of the development of robust tracking and response breadth, and hence of belief-like states, are properties of the informational environment. A transparent environment is one where the functional relevance to an organism of states of the world is directly detectable. In a translucent or opaque environment, on the other hand, states significant to an organism map in less direct or simple ways onto states that the organism can detect. A hostile environment, finally, is one where the specific explanation of translucency or opacity is the design and behaviour of competing organisms. Where the costs of implementing belief-like states are repaid in more discriminating behaviour allocation under conditions of opacity and hostility, Sterelny argues, selection can favour the development of decoupled representations of the environment.

In the case of desires, however, Sterelny maintains that the same arguments do not generalise. One justification that he offers for this view is that, unlike the external environment, the internal processes of an organism are under significant selection pressure for transparency. Parts of a single organism, having coinciding interests, have nothing to gain from deceiving one another, and much to gain from accurate signalling of their states and needs. Key conditions favouring the development of belief-like states are therefore absent in the case of desires. Here I argue that Sterelny’s reasons for saying that his treatment of belief doesn’t generalise to motivation (desires, or preferences) are insufficient. There are limits to the transparency that internal environments can achieve. Even if there were not, tracking the motivational salience of external states calls for pervasive attention to valuation in any system in which selection has driven the production of belief-like states.



1. Introduction

In his (2003) Thought in a Hostile World Sterelny develops a detailed articulation of an approach suggested in Godfrey-Smith (1996, 2002). Godfrey-Smith’s proposal, the Environmental Complexity Hypothesis (ECH), maintains that “the function of cognition is to enable the agent to deal with environmental complexity.” That is, as an explanatory hypothesis, the ECH proposes that the capacity which cognition gives to organisms that have it, and which is responsible for its success under natural selection, is that of responding more effectively to heterogeneity in their environments. For present purposes I simply accept the ECH. I happen to support it as a fruitful research programme, but won’t offer a defence. (For broadly supportive lines of critical comment on Sterelny 2003 see Papineau 2004, and for criticism on points of detail see Christensen 2010.) My aim, rather, is to develop a line of thinking internal to the ECH, one that concerns the specific treatment of motivational states in Sterelny’s (2003) book.

Sterelny describes his objective as combining two integrative projects that arise when humans are studied from a naturalistic perspective. One ‘internal’ project concerns the “wiring and connection facts” about human cognitive architecture, and aims to assemble a “coherent theory of human agency and human evolutionary history”. The other ‘external’ project attempts to relate the conclusions of the first project to the ways some social sciences (including psychology, anthropology, and economics) have produced “refined versions of our folk self-conception”, where that self-conception is that we are intentional beings. Complementing the “wiring and connection facts”, these projects focused on intentional action have produced the “interpretation facts” (Sterelny 2003, 3-5).

Sterelny frames his own proposal against the backdrop of a position he calls the “Simple Co-ordination Thesis” (SCT). According to adherents of the SCT, which comes in various forms:

“… (a) Our interpretative concepts constitute something like a theory of human cognitive organization: they are a putative description of the wiring-and-connection facts; (b) Our interpretative skills depend on this theory, and our ability to deploy it on particular occasions; (c) We are often able to successfully explain or anticipate behaviour because this theory is largely true.” (Sterelny 2003, 6).

In the first part of Thought in a Hostile World, entitled ‘Assembling Intentionality’, Sterelny argues, in effect, that the Simple Coordination Thesis is approximately correct. He rejects the eliminativist view that belief and desire talk is false, and also the Dennettian attributionist view that the interpretation facts ‘do not have the function of describing the internal organization of agents’ (Dennett 1987; Sterelny 2003, 7). There are, Sterelny argues, internal ‘belief-like’ states that have features approximately like those expected by the SCT. They are elaborations of simpler systems, and it is unclear to what extent they are found in animals other than humans, but some other primates plausibly have them. In the case of preferences, Sterelny maintains that the SCT is a less accurate approximation, and that ‘desire-like’ states are only incompletely present even in humans.

My critical concern is specifically with Sterelny’s treatment of desire. Sterelny argues that there are important functional dis-analogies between belief-like states and desire-like states, so that the considerations that explain why selection could in some cases favour the kinds of cognitive elaboration culminating in belief-like states largely don’t apply to motivation. In his view simpler control systems can achieve much more in the motivational domain, and so there’s even less evidence that desire-like states are found in non-human animals. I begin with Sterelny’s account of belief.




2. Sterelny on the descent of belief

Sterelny develops an evolutionary history of beliefs starting with the ‘detection system,’ an idealised and very simple control system that falls well short of belief. A detection system mediates “a specific adaptive response to some feature (or features) of [an organism’s] environment by registering a specific environmental signal” (Sterelny 2003, p14). One of Sterelny’s examples is the cockroach flight response, which triggers running away from gusts of air, registered by hair cells on their cerci (Sterelny 2003, p14). The idea is that the creature has a specific behavioural response (running away in this case) to a single environmental cue (the moving air caused by a striking toad, or magazine-wielding human). It seems clear enough that it could sometimes be a satisfactory solution to a control problem to have a behaviour triggered by a single cue.

It isn’t clear whether any specific organism is actually supposed to instantiate a detection system strictly understood. At least some of Sterelny’s examples are of animals whose flexibility in either detection or response is greater than a detection system as described would allow.[1] He also suggests that detection systems can be acquired by simple associative learning.[2] For present purposes these worries can be set aside. The notion of a detection system is a useful idealisation even if there are no confirmed pure examples (Godfrey-Smith 2014, Chapter 2). Sterelny puts the notion of a detection system to work by thinking about the costs and benefits of such simple mechanisms, and possible forms of incremental modification that might lead to more discriminating control.
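A minimal sketch may help to fix ideas. The few lines of Python below render a detection system in the idealised sense just described; the cue name, threshold and response labels are my own illustrative assumptions, not anything specified in Sterelny (2003).

# A detection system, idealised: one environmental cue mapped directly
# onto one canned behavioural response, with no memory, no integration
# of other cues, and no choice among alternative behaviours.

def detection_system(air_speed: float, threshold: float = 0.5) -> str:
    """Trigger the fixed escape response whenever the single cue exceeds a threshold."""
    if air_speed > threshold:
        return "run"        # the one response the system can produce
    return "carry on"       # otherwise behaviour is unaffected

print(detection_system(0.8))  # -> "run"

The brittleness discussed below is already visible here: anything that produces the cue produces the response, whatever its real source.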

The most obvious benefit of detection systems is that they’re relatively simple, and so cheap to build and run. As Sterelny points out, though, organisms with cue-driven behaviours can be vulnerable to exploitation. Fireflies which approach species-typical flash sequences to locate mates are lured by predators generating the same sequences to attract meals (Sterelny 2003, p15). Ants using the absence of chemical signals to distinguish (and attack) invaders are exploited by parasitic beetles mimicking the signals, and food-eliciting gestures (Sterelny 2003, p15).

Sterelny refers to the general condition in which environmental signals that an organism can detect are reliably good occasions for specific responses it is capable of producing, that is where cue-driven behaviour will be successful, as informationally transparent (Sterelny 2003, p20). A transparent environment is “characterized by simple and reliable correspondences between sensory cues and functional properties.” A key insight that he develops in his (2003) is that not all environments have this property. In cases where relevant features of the environment “map in complex, one to many ways onto the cues [an organism] can detect”, the organism occupies an informationally translucent environment (Sterelny 2003, p21). In some cases the translucency is not the result of brute heterogeneity in the surrounding world, but is produced by other living things with an interest in misleading (for example by being camouflaged to fool predators or prey) in which case Sterelny calls the environment informationally hostile.

No general prediction follows from either transparency or hostility. But we can, Sterelny argues, make the conditional prediction that where the gains from more discriminating control outweigh the cost, then selection will favour certain kinds of elaboration of detection systems, if means are available. He discusses two particular kinds of elaboration - robust tracking, and response breadth.

Robust tracking is elaboration on the ‘input’ side. Where a detection system triggers a behaviour in response to a single cue, robust tracking links response to multiple, integrated cues. This can allow tracking of some environmental states under conditions of translucency or hostility. Reed-warblers are exploited by cuckoos, and face a serious problem distinguishing parasitic eggs from their own. Sterelny suggests that the multi-modal discrimination they draw on in determining whether to reject an egg, including sensitivity to size, colour, shape, timing of appearance, and whether a cuckoo has recently been sighted near the nest, is an example of robust tracking (Sterelny 2003, p27-29).

Response breadth, on the other hand, is elaboration on the ‘output’ side, and occurs when more than one behaviour might be produced in response to the same registered contingency. One of Sterelny’s illustrations concerns responses to predators. Having registered the presence of a predator, an organism with response breadth might make one of a variety of responses including immediate flight, approach, or continuing with heightened vigilance, perhaps depending on the state of the organism itself (Sterelny 2003, p33-40).[3]

When robust tracking and response breadth are combined, we get what Sterelny calls ‘decoupled representation’. Now behaviour can be partly contingent on relatively high level patterns of environmental information, perhaps integrated over time scales reaching beyond the present and sensitive to the state of the behaving organism. Decoupled representations are “internal states that track aspects of our world, but which do not have the function of controlling particular behaviors” (Sterelny 2003, p39). Such sophisticated states are, as found in humans at least, worth calling ‘belief-like states’. These are genuine cognitive states which, while they may not share all of the features associated with any particular version of the Simple Co-ordination Thesis, are close enough that Sterelny’s position regarding the interpretation facts in humans is neither eliminativist nor Dennettian attributionist.
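By way of contrast with the detection system sketched earlier, here is an equally minimal rendering of a controller with robust tracking on the input side and response breadth on the output side. The cue names, weights and candidate behaviours are illustrative assumptions of mine, not Sterelny’s.

# Robust tracking: several cues are integrated into one estimate of a
# distal state (here, 'a predator is nearby'). Response breadth: the
# registered state can issue in more than one behaviour, partly
# depending on the state of the organism itself.

def robust_tracker(cues: dict) -> float:
    """Integrate several noisy cues into a single threat estimate."""
    weights = {"shadow": 0.5, "alarm_call": 0.3, "scent": 0.2}
    return sum(weights[c] * cues.get(c, 0.0) for c in weights)

def select_response(threat: float, energy: float) -> str:
    """Choose among flight, approach, and heightened vigilance."""
    if threat > 0.7:
        return "flee"
    if threat > 0.3:
        return "continue with heightened vigilance" if energy < 0.2 else "approach to inspect"
    return "keep foraging"

threat = robust_tracker({"shadow": 0.9, "alarm_call": 0.4})
print(select_response(threat, energy=0.6))   # -> "approach to inspect"

Nothing about this toy requires decoupled representation in Sterelny’s full sense, but it does show how elaboration on both sides opens a gap between registering a state of the world and settling what to do about it.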

3. Sterelny on the descent of preference

Sterelny devotes less attention to desire, or preference, than to belief. Three chapters of his (2003) focus mostly on the natural history of belief-like states, followed by a single chapter on the descent of preference. Although there is some overlap, the comparative brevity of the treatment of motivation is not because it is continuous with the account of belief. In fact, a major order of business in the motivation chapter is to argue that the account previously developed for belief cannot be generalised to motivation. The conclusion Sterelny eventually draws is, furthermore, more friendly to a kind of eliminativism. Although he finds a plausible rationale for the evolutionary development of belief-like states, he says that he does not “think that there is even a rough mapping between preferences identified in our interpretive frameworks, and states of the internal cognitive architecture that controls human action” (Sterelny 2003, p87).

This is the conclusion that I wish to reject. I do so here by undermining Sterelny’s argument that the belief treatment doesn’t generalise to motivation. I therefore need to lay out Sterelny’s reasoning in more detail than in the belief case. The criticism I offer here is rather restricted, and negative. I won’t develop other lines of criticism of Sterelny’s account of motivation, and will only be able to hint at an alternative positive view, sharing more than Sterelny envisages with his account of belief.

3.1 Sterelny’s explanatory target

In the case of beliefs, Sterelny’s explanatory target is relatively close to a standard (teleosemantic) conception. Belief-like states, as described, have representational content. They have satisfaction conditions and can be more or less supported by, and responsive to, environmental information. It is less clear that we are on such familiar ground regarding desires. Sterelny mostly doesn’t refer to ‘desire-like states’, and favours the term ‘preference’ for the motivational component of folk-psychological explanation. He describes his explanatory target as “motivation based on representations of the external world” (2003, p79). He seems, furthermore, to endorse the distinction drawn by Tony Dickinson between ‘habit based’ and ‘intentional’ agents, where intentional agents are sensitive to the value of actions, including values not explicitly cued by occurrent external information, and to the causal connections between acts and their consequences (e.g. Dickinson and Balleine 2000; Sterelny 2003, p82).[4]

Sterelny thus associates ‘preferences’ with means-end reasoning, suggesting that the “most incontrovertible cases” of the applicability of belief and preference psychology are “in complex calculating games like bridge and chess” (Sterelny 2003, p95). Such very cognitive and deliberative motivational systems are, says Sterelny, to be distinguished from motivation by drives, or feeling. Drives, in his view, signal departures from homeostasis and in at least some cases motivate directly by feeling (he does not say that all drive based control involves feeling). He also maintains that drives can solve a wide range of control problems, and consequently that it is less clear that there is a job for preferences to do. As he poses the problem: “… what selective payoff could there be through routing action (say) through preferences about drinks rather than through sensations of thirst?” (Sterelny 2003, p81).

Various commentators have expressed dissatisfaction with how Sterelny opposes drive-based and preference-based motivation here (e.g. Schulz 2013, Papineau 2004). I’ll return to some of the difficulties with it in due course. For now, we need to be clear that Sterelny maintains that the motivational counterpart to beliefs is motivation focussed on representations of goal states, involving means-end reasoning, and that for preferences to be predicted, they need to do better (given the costs of implementing them) than motivation by drives signalling departures from homeostasis.

3.2 Why the belief case won’t generalise

It might seem as though the considerations favouring the development of belief-like states would also explain the construction of motivational systems. Detection systems are ‘pushmi-pullyu’ (Millikan 1995) control solutions that yoke an indicative aspect (the environmental information to which each is sensitive) to an imperative one (the activity that each triggers).  Decoupled representations replace these simple mappings with the more discriminating responses to environmental information that Sterelny calls robust tracking and replace single imperative output with response breadth — behaviour drawn from a wider repertoire of possibly relevant activities. The latter elaboration, specifically, might seem by itself to create work for motivational states, to prioritise among the resulting repertoire of activities. This is the very inference that Sterelny wishes to block. The conditional argument from translucency or opacity to belief-like states cannot, he says, be generalised to give an account of “motivation based on representations of the external world.” (Sterelny 2003, p79.)

The main reason Sterelny offers for this is that the departures from transparency that explain the existence of belief-like states are absent from internal environments. This is not, furthermore, accidental: since internal environments have homogeneous evolutionary interests, in the sense that all parts of an organism are - so to speak - on the same team, they both lack hostility, and will be under selection pressure for transparency. This means that signals of biological needs will tend to be trustworthy. “The natural physiological side-effects of departures from homeostasis”, says Sterelny, “have the potential to be recruited as signals for response mechanisms. Over time, we would expect these signals to be modified to become cleaner and less noisy; and internal monitoring systems to become more efficient in picking them up and using them to drive appropriate responses” (Sterelny 2003, p80).

Drives might, of course, be simultaneously triggered in incompatible ways, but Sterelny maintains that one fairly robust solution to the control problem this poses is to have a “built in motivational hierarchy” (2003, p81). He doesn’t really flesh this proposal out very much, but the idea seems to be that a relatively fixed ranking of drives can determine behavioural priorities in ways that depend neither on representations of the values of outcomes, nor the connections between actions and their consequences. He refers approvingly to Rodney Brooks (1991) here, encouraging the suggestion that this fixed motivational hierarchy might depend to a significant extent on low bandwidth trumping relationships between drives. (In the early 1990s Brooks argued that ‘intelligent’ systems could be based on ‘subsumption architectures’ which were, roughly, hierarchies of detection systems operating without significant representational resources.)[5]
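The flavour of such a ‘built-in motivational hierarchy’ can be conveyed with a toy arbitration scheme. The sketch below is my own gloss in the spirit of Brooks-style subsumption, not anything given in Sterelny (2003) or Brooks (1991); the drive names, threshold and priority ordering are illustrative assumptions.

# A fixed motivational hierarchy: whichever active drive ranks highest
# in an innate priority ordering simply trumps the rest. There is no
# representation of the values of outcomes, and no learning.

FIXED_PRIORITY = ["escape_predator", "drink", "eat", "rest"]

def arbitrate(drive_levels: dict, threshold: float = 0.5) -> str:
    """Return the behaviour cued by the highest-ranked drive above threshold."""
    for drive in FIXED_PRIORITY:
        if drive_levels.get(drive, 0.0) > threshold:
            return drive            # winner takes all
    return "idle"

# Hunger is stronger than thirst here, but thirst outranks it by design.
print(arbitrate({"drink": 0.8, "eat": 0.9}))   # -> "drink"

The point of the sketch is only that behavioural priorities are set by the ordering itself, independently of any representation of what the competing actions would yield.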


3.3 Sterelny’s positive view

While Sterelny holds that many organisms solve motivational problems without “representing their needs”, relying instead on a “built-in motivational hierarchy” which ranks various drives mostly based on transparent internal signals, sometimes supplemented with a little external information, this is not true of all of them. The advantages of preferences over drives, according to Sterelny, include that they liberate motivation from ‘immediate affective reward’, that they allow more efficient decision making in cases where the range of available behaviours is large, that they allow a creature to have a smaller number of motivational states, that they permit motivational conflict resolution by means other than ‘winner take all’, and that preference based systems are able to cope with changing needs, including needs that are phylogenetically novel (Sterelny 2003, pp92-95; see also Schulz 2013, p598). Preferences, as well as representing goal states, can be ranked, and they can be learned.

Sterelny’s exposition is fairly cryptic on these points, and some commentators have expressed the view that the supposed advantages of preferences aren’t explained clearly enough, or that the contrast with what drive-based motivation could achieve is insufficiently motivated (again, see Papineau 2004). Certainly, Sterelny says very little to justify the claims that a preference-based system would need fewer motivational states than a drive-based one, or that drive-based motivation would have to resolve conflict on a winner-take-all basis. Nonetheless he maintains that a small sub-set of species (hominids definitely, and maybe some others) are capable of more richly intentional and preference-based action, aimed at planning activities to achieve desired states of the external world. This, he thinks, does require some representation of value. But this transformation “is very unlikely to be complete” (Sterelny 2003, p95). Sterelny maintains that much other behaviour allocation is likely based on fairly quick and dirty procedures, and various kinds of distributed and non-representational control processes (as noted above, he explicitly and approvingly cites Brooks 1991).

The view that he reaches is, therefore, still clearly a version of the environmental complexity hypothesis (ECH), insofar as the development of preferences is a response to external complexity. But it is rather more friendly to eliminativism than his position in the case of belief, because the (alleged) transparency of internal environments means that there is much less complexity for cognition to ‘deal with’.



4. Criticisms of Sterelny

Sterelny, then, argues that internal environments will tend to be transparent, and that because of this, the inference from informational complexity to belief-like states does not generalise to motivation. His argument also identifies preferences with means-end reasoning.

In what follows I criticise all three of these commitments. First, I undermine Sterelny’s claim regarding the transparency of internal environments. I argue, in section (4.1) below, that his defence of the claim is insufficient, and consequently that internal environments can also favour robust tracking.

Second, I argue that even if internal environments were transparent, it would not follow that cue-based control processes would be generally sufficient. I call the condition in which cue-based control is sufficient ‘motivational transparency’, analogous to informational transparency. In section (4.2) I argue that motivational transparency does not generally follow from internal transparency. 

Finally I maintain that Sterelny has made an unsatisfactory choice of explanatory target in his discussion of preference. I argue in section (4.3) that rather than focusing on means-end reasoning, or represented goal states, what is needed, and prior to either, is a notion of incentive values, attached to occurrent environmental information and possible actions.

The lines of critical thinking offered here are not exhaustive, and the arguments I provide are brief. I aim to highlight some difficulties with what Sterelny himself identifies as ‘tentative’ moves in the area, with the aim of advancing the same general project. I won’t have space to develop or defend a positive view distinct from Sterelny’s, although some hints will emerge.


4.1 Limits on Internal Transparency

As explained above, Sterelny maintains that internal environments, having homogeneous interests, will be devoid of hostility, and so be under pressure to develop accurate and transparent signals of biological states and needs. The interests within an organism are presumably homogeneous (Sterelny does not spell this out explicitly) because all parts of a single organism are in some sense equal shareholders in whatever reproductive success the individual organism enjoys.

This suggestion plausibly applies, subject to cost constraints, to internal signals, to the extent that internal interests coincide. This they mostly do.[6] The relative absence of hostility does mean that one source of pollution found in the external informational environment is largely absent internally.

But hostility is not the only source of departures from transparency. As Sterelny says, an environment is informationally translucent when states that matter to an organism “map in complex, one to many ways onto the cues [it] can detect” (Sterelny 2003, p21). These conditions can, and do, arise in hostility-free internal environments, in a number of different ways. Here I identify three considerations:

Limits on transduction: Not all internal states have unique signatures that cost-effective transducers can specialise in detecting. Here are a few examples in humans. Non-nutritive sweeteners trigger transducers whose proper function is to respond to sugars that can be digested. The responses of salt receptors, which depend on ion channels, are also sensitive to the ambient sodium concentration in the organism, so the resulting neural signals can be highly ambiguous (e.g. Bertino, Beauchamp & Engelman 1982). Thermoreceptors don’t come in a single type detecting ‘objective temperature.’ Instead information about temperature depends on combinations of receptors for cold and heat, as well as additional nociceptive receptors for extremes of each (for a philosophically rich discussion of peripheral thermoreception see Akins 1996).

Complex mappings: Motivationally relevant states can also depend on multiple cues. Information about temperature in humans, to continue with that example, is drawn from multiple receptors of different types that are distributed non-uniformly across the surface of the body. As Akins notes, even on the face the ratio of cold to warm receptors varies from about 8:1 on the nose to 4:1 on the cheeks and chin, while the lips have almost no cold receptors (Akins 1996, 346). Any ‘net’ signal that might drive behaviour will require these signals to be integrated in some way. More generally, internal states can span multiple organs and tissue types, with varying speeds of signalling, and latencies in responding to actions that affect them.

Cost versus accuracy tradeoffs: There are costs to improvements in tracking, just as in the external case. Simply adding internal transducers increases information load, along with the metabolic and other costs of building and running the receptors. Psychophysical processes generally don’t preserve absolute magnitudes, but rather compress transduced variation into a baseline-dependent encoding, where the baseline itself is variable (Barlow 1961). In addition, the further a body is from being a dimensionless point, the more internal signals will tend, sometimes, to be distal or delayed, and subject to the typical error types that arise from distal signals (such as false positives from stimulation on any ‘labelled line’ channel between transducer and brain).

We should conclude that even though internal environments aren’t generally hostile, they can certainly be translucent. And translucency favours robust tracking. So Sterelny’s premise is at least not straightforwardly or generally true. What about the inference that he draws from it?

4.2 Internal transparency doesn’t imply Motivational Transparency

In the previous section I argued that there could be benefits to robust tracking in internal environments because (just as with external ones) there are limits to transparency. Since Sterelny argues from internal transparency to the non-generalisability of his treatment of belief, this is a problem for his position. But it is not the only one. To see this, let us assume that internal environments are fully transparent, in the sense that the precise levels of deviation from homeostasis of all relevant internal variables are signalled with consistently high fidelity. Even then, cue-bound control can be inefficient.

One reason for this is that needs can have multiple satisfiers. A cold animal might be able to make itself warmer, inter alia, by shivering, by huddling with conspecifics, or by relocating to a warmer spot. A dehydrated animal can drink, or it can eat, since almost all food contains some water. And so forth. A hungry animal might have more than one foraging option. Accurate information about needs does not always, then, suggest a unique ‘good enough’ response which would favour cue-bound control.

A further reason is that actions typically have multidimensional costs and benefits. As noted, most eating rehydrates as well as nourishes. Different opportunities to eat, or drink, have their own costs in energy, time, extent of competition for the same resource, etc., and their own risks including predation en route, or at the site itself, as well as payoffs in quality and quantity of the resource itself. Costs and benefits can have sharply varying fitness implications - being a little tired or hungry quite frequently is nowhere near as bad as being eaten even once. Even if an animal had accurate information about all of these contingencies, it would not generally be obvious what course of action was appropriate or efficient. (We already know that it can be difficult to work out what to do in games of perfect information such as chess.) And animals mostly don’t have most of this information, which favours - for at least some of them - being able to sample the environment and be sensitive to the returns from various policies.

Sterelny is aware of these considerations, but does not regard them as favouring the development of preferences. An important part of the reason for this is his view that many animals can deal with the problem of trading off different courses of action by means of a “built-in motivational hierarchy” (2003, p81). This seems likely to be correct, up to a point. But it is not a reason for thinking that the arguments (conditionally) favouring decoupled representation don’t generalise to motivational states. As with detection systems, and belief-like states, we should consider the relative strengths and weaknesses of more or less quick and dirty, or inflexible, procedures.

A fixed hierarchy can probably produce quick, and good enough, responses in a wide range of situations. But such brittle solutions have the very problems of inefficiency under conditions of informational translucency that Sterelny described when focusing on beliefs, including vulnerability to exploitation (see section 2 above). And if the gains in efficiency from a less rigid approach outweigh the costs, then something other than a fixed hierarchy might pay its way. What this something else might be, I suggest, is relatively general (across actions and environmental states) sensitivity to reward. Then the specific profile of things found rewarding can be set by processes of natural selection, and the organism’s behavioural dispositions partly shaped by experience of action-reward relationships. If we combine the argument of the preceding section and this one, we see a case for the robust tracking of motivationally relevant states, and a role for motivation in prioritising actions given response breadth. What motivation should do, what it is for, is prioritising on the basis of the returns from actions.
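The alternative gestured at here, a relatively general sensitivity to the returns from actions, can be sketched just as minimally. The fragment below is my own illustration, with an incremental value update broadly in the style of Sutton and Barto (1998); the action names and learning rate are assumptions for the example, and nothing this specific is proposed in Sterelny’s text.

# General sensitivity to reward: the estimated return of each action is
# learned from experience, and those estimates, rather than a fixed
# ranking, set behavioural priorities.

ACTIONS = ["forage_near", "forage_far", "huddle", "rest"]
values = {a: 0.0 for a in ACTIONS}    # running estimates of return
ALPHA = 0.1                           # learning rate

def update(action: str, reward: float) -> None:
    """Nudge the value estimate toward the reward actually experienced."""
    values[action] += ALPHA * (reward - values[action])

def choose() -> str:
    """Prioritise whichever action currently promises the greatest return."""
    return max(values, key=values.get)

# Experience reshapes priorities: distant foraging proves more rewarding.
for r in [0.2, 0.3, 0.1]:
    update("forage_near", r)
for r in [0.9, 0.8, 1.0]:
    update("forage_far", r)
print(choose())   # -> "forage_far"

Unlike a fixed hierarchy, a scheme of this shape can reorder its priorities as the environment, or the organism’s circumstances, change.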

What I’m describing, though, sounds rather different from what Sterelny sets up as his explanatory target. This is deliberate, and in the following sub-section I attempt to justify it.

4.3 The wrong target

As noted above (section 3.1) Sterelny takes the target for an account of the motivational part of folk psychology to be representations of goal states, and a capacity for means-end reasoning that selects actions which bring the goal states closer. Recall, though, how he describes his overall project, as relating the “wiring and connection facts” about human cognitive architecture to the “interpretation facts” which are the elaborations of our folk self-conception as intentional agents. This means relating wiring and connection facts to beliefs, on the one hand, and desires on the other. Or, perhaps, desires as elaborated or regimented by some science (or group of them) that takes the folk conception as a starting point. So, we can ask, what is the approximate functional content of the folk notion of desire, or an appropriate scientific regimentation of it, for relating these two kinds of fact?

I propose that the core of the folk conception is a fairly general (and sometimes imprecise) notion of motivational strength. An intentional agent desires different goals, or to perform different actions, to varying degrees. When two mutually exclusive actions are available, it does the one that it wants the most.

Such a general notion is compatible with some of the leading philosophical accounts of desire, even though the field is contested.[7] A leading contender is the view that desires are dispositions to action, given beliefs (e.g. Smith 1987). Teleosemantic theories of desire are dispositional, and also offer an analysis of the biological function of desires (e.g. Millikan 1984, Papineau 1987). One competing approach is provided by theories of desire based on pleasure, for example Morillo (1990), which in addition identifies the dopamine system in the brain as the basis of pleasure. Details of this view are disputed by Schroeder (2004), who associates desire with learning, and identifies the dopamine system with reinforcement learning, rather than pleasure. (There are also theories of desire less obviously friendly to naturalists, such as broadly Socratic ones connecting desire to judgements about what is good.) Without joining those disputes, I note that the disposition, pleasure and learning accounts all have weaker commitments than Sterelny’s target (‘representations of the external world’, or means-end reasoning).[8] Where these theories require representations of the world, those are provided by beliefs. What desires do is relate world-states, whether represented or cued by occurrent experience, to tendencies to actions. As Papineau puts it, beliefs should be thought of as having “no effects to call their own”; which effects are actually produced depends on the motivational states (Papineau 2004, 494).

Matters do not change substantially if we shift focus to consider scientific theories as another source of what Sterelny calls “refined versions of our folk self-conception”. In behavioural psychology and economics (which are among the leading scientific regimentations of something that might be related to the folk concept of desire) the key notions are reward or reinforcement or utility, which are considered to provide an ordering of desirability for states and actions (see Spurrett 2014). In addition, in behavioural neuroscience the drive-based theories that Sterelny seems to favour for organisms in which preferences (as he understands them) have not developed have largely been displaced by incentive-based approaches.[9] Here too, the theories have more modest commitments than Sterelny’s. What preferences represent are rewards (or reward expectancies), not states of the external world.

None of this is to say that means-end reasoning is uninteresting or unimportant. When it occurs it plausibly stands in need of an evolutionary rationale. From the perspective suggested here, though, means-end reasoning is primarily a representational achievement, consisting in the capacity to simulate transitions between world-states, including transitions occasioned by actions, and to evaluate them using the same general sensitivity to incentives as applies in ‘on-line’ experience (see Shea et al. 2008). Sterelny is probably correct, furthermore, that means-end reasoning is relatively incompletely developed even in humans, and found only marginally in relatively few nonhuman animals.

5. Conclusion

According to the Environmental Complexity Hypothesis (ECH) “the function of cognition is to enable the agent to deal with environmental complexity” (Godfrey-Smith 1996, 2002). Sterelny (2003) develops an account of folk psychology within the general terms of the ECH. He argues that belief-like states can be explained as a response to failures of environmental transparency, combining robust tracking (sensitivity to multiple types of detectable information) and response breadth (the relevance of registered states to more than one behaviour). But, he argues, in the case of motivation internal environments will tend to be transparent, and because of this the inference from translucency to (an approximation of) a folk-psychological kind doesn’t apply. Preferences, understood as capacity for means-end reasoning about representations of the external world, aren’t predicted, at least for most organisms, because transparent signals of internal state plus a built-in hierarchy of drives are a pretty good way of prioritising actions.

I have accepted the Environmental Complexity Hypothesis, and broadly support Sterelny’s treatment of belief-like states, but have argued against significant parts of his treatment of desire-like states. Internal environments are not as transparent as he thinks, with the result that there is work for robust tracking there too. Motivational transparency does not follow from informational transparency either, and so there is work for relatively generalised sensitivity to reward. The view of preference that is predicted here is different from what Sterelny sets out to find, but I’ve also argued that means-end reasoning isn’t the most important feature of desire. Preferences are representations of a sort, but they represent the returns (experienced or anticipated) from experienced states, or from possible actions.[10] Most of the burden of argument has been on the negative project of blocking Sterelny’s ‘no generalisation’ argument. The fuller development of the positive picture suggested here is a task for another occasion.



References
  
Akins, K. 1996. Of sensory systems and the ‘aboutness’ of mental states. Journal of Philosophy, 93(7), pp337-372.

Barlow, H. B. 1961. The coding of sensory messages. In: Thorpe and Zangwill (eds.), Current Problems in Animal Behaviour. New York: Cambridge University Press, pp. 330-360.

Berridge, K.C. 2004. Motivation concepts in behavioral neuroscience, Physiology and Behavior, 81, 179-209.

Bertino, M., Beauchamp, G. K., and Engelman, K. 1982. Long-term reduction in dietary sodium alters the taste of salt. American Journal of Clinical Nutrition, 36: 1134-1144.

Burt, A. & Trivers, R. 2006. Genes in Conflict: The biology of selfish genetic elements, Cambridge, Mass.: Belknap/Harvard University Press.

Dickinson, A., & Balleine, B. 2000. Causal cognition and goal-directed action. In C. Heyes & L. Huber (Eds.), The evolution of cognition (pp. 185–204). Cambridge, MA: MIT Press.

Christensen, W. 2010. The Decoupled Representation Theory of the Evolution of Cognition: A Critical Assessment. British Journal for the Philosophy of Science, 61: 361-405.

Godfrey-Smith, P. 1996. Complexity and the Function of Mind in Nature, Cambridge: Cambridge University Press.

Godfrey-Smith, P. 2002. Environmental Complexity and the Evolution of Cognition. In R. Sternberg and J. Kaufman (eds.) The Evolution of Intelligence. Mahwah: Lawrence Erlbaum, pp. 233-249.

Godfrey-Smith, P. 2014. Philosophy of Biology, Princeton: Princeton University Press.

Haig, D. 2002. Genomic Imprinting and Kinship, New Brunswick: Rutgers University Press.

Libersat, F. 1994. The dorsal giant interneurons mediate evasive behavior in flying cockroaches. Journal of Experimental Biology, 197, pp405-411.

Millikan, R. 1984. Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.

Millikan, R. 1995. Pushmi-pullyu representations. Philosophical Perspectives, 9, pp185–200.

Morillo, C. 1990. The reward event and motivation. Journal of Philosophy, 87: 169–86.

Papineau, D. 1987. Reality and Representation. New York: Basil Blackwell.

Papineau, D. 2004. Friendly thoughts on the evolution of cognition. Australasian Journal of Philosophy, 82(3): 491-502.

Schulz, A. 2011. The adaptive importance of cognitive efficiency: an alternative theory of why we have beliefs and desires. Biology and Philosophy, 26: 31-50.

Schulz, A. 2013. The benefits of rule following: A new account of the evolution of desires. Studies in History and Philosophy of Biological and Biomedical Sciences, 44: 595-603.

Schroeder, T. 2004. Three Faces of Desire. New York: Oxford University Press.

Schroeder, T. 2009. Desire. The Stanford Encyclopedia of Philosophy. (plato.stanford.edu/entries/desire/)

Shea, N., Krug, K. and Tobler, P.N. 2008. Conceptual representations in goal-directed decision-making. Cognitive, Affective, & Behavioral Neuroscience, 8(4): 418-428.

Shea, N. 2014. Reward Prediction Error Signals are Meta-Representational, Noûs, 48(2): 314–341.

Smith, M. 1987. The Humean Theory of Motivation. Mind, 96: 36–61.

Spurrett, D. In preparation. Intragenomic conflict and intrapersonal conflict.

Spurrett, D. 2014. Philosophers should be interested in ‘common currency’ claims in the cognitive and behavioural sciences. South African Journal of Philosophy, 33(2): 211-221.

Sterelny, K. 2003. Thought in a Hostile World, Oxford: Blackwell.

Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. Cambridge, Massachusetts: MIT Press.






[1] The same hair cells in the cockroach, for example, also cue direction of fleeing while the cockroach is in flight (Libersat 1994).
[2] This seems to be in tension with the suggestion that a detection system is a relatively fixed architectural channel (what is sometimes disagreeably called ‘hard-wired’), because learning links between actions and environmental states needs a (relatively) more generic cognitive mechanism. This point is made in Papineau (2004).
[3] I am passing over worries that response breadth as described by Sterelny might not do the work that he has in mind (e.g. Papineau 2004).
[4] Sterelny, arguing that the evidence is inconclusive, disagrees with Dickinson, who thinks that rats count as intentional agents.
[5] The arch anti-representationalist Brooks (1991) is a curious choice of ally for Sterelny who, at this point in his (2003), has just spent several chapters explaining how decoupled representations arise, and are central to human intelligence.
[6] As David Haig has explained, imprinted gene activity may lead to internal deception and other internal strategic interaction (Haig 2002). Other forms of within-organism conflict may disrupt internal transparency in other ways (Burt & Trivers 2006). Although interesting, I set these phenomena aside here, and assume that internal interests coincide entirely. See Spurrett (in preparation).
[7] This quick survey of theories of desire is indebted to Schroeder (2009).
[8] Learning based theories can allow both world-modelling and planning, but do not require either. See Sutton and Barto (1998).
[9] For a review see Berridge (2004).
[10] Shea (2014) argues persuasively that temporal difference learning, which seems to be widespread in organisms with brains, actually involves some meta-representation, in the sense that the contents of reward prediction errors are about other representational states.