Tuesday, April 9, 2013

Important Sources: Rodney Brooks


Introduction


[Picture of the robot ‘Herbert’]
People who think that there is a proximal common currency[1] for some type of agent hold that the values of the possible actions open to that agent are represented in its cognitive states.

One way to deny this is to deny that agents need cognitive states of any significant kind. This, more or less, is what Rodney Brooks went around doing in the late 1980s and early 1990s.

Here I take a quick look at his classic 1991 paper “Intelligence Without Representation”, and note some of the ways in which it challenges views about common currencies. In the paper Brooks describes the design of a large class of mobile robots that he collectively calls ‘creatures’. All quotations are from Brooks (1991).

What should creatures be able to do?


In Section (4) of his article Brooks lists some ‘requirements’ for his ‘creatures’. One of these requirements gives us reason to think that what he has to say is relevant to the common currency question:

“A Creature should be able to maintain multiple goals and, depending on the circumstances it finds itself in, change which particular goals it is actively pursuing; thus it can both adapt to surroundings and capitalize on fortuitous circumstances.”

This certainly gives the impression that creatures are supposed to cope (somehow) with multiple goals and varying conditions. That’s the sort of problem that often leads people to hypothesise a unified value representation like a common currency.

Brooks, of course, is famously anti-representation. So shortly after stating these requirements, he gets down to describing the conventional approach he rejects, which involves ‘decomposition by function’. Here he says:

“Perhaps the strongest, traditional notion of intelligent systems (at least implicitly among AI workers) has been of a central system, with perceptual modules as inputs and action modules as outputs.”

While Brooks doesn’t say so himself, this ‘central system’ is clearly the kind of place where values (in a common currency) could be attached to possible actions. The main reason Brooks doesn’t say so, I suggest, is that the paradigmatic representationalist is also a cognitivist, who devotes much more attention to epistemic states (perception, memory, reasoning) than to motivational ones.

Next, in Section (4.2) Brooks describes the alternative approach that he favours, which is ‘decomposition by activity’. This approach divides the system not into central and peripheral components, but into ‘activity producing sub-systems’, which he calls ‘layers’, each of which ‘individually connects sensing to action’.

The layers need not all use the same sensors or activate the same degrees of freedom, and not all processing is centralised. So, speaking of a mobile robot built to avoid collisions, Brooks says:

“In fact, there may well be two independent channels connecting sensing to action (one for initiating motion, and one for emergency halts), so there is no single place where ‘perception’ delivers a representation of the world in the traditional sense.”

And when layers are added, their own parochial goals can sometimes conflict over the way to use the same degree of freedom:

“This new layer might directly access the sensors and run a different algorithm on the delivered data. The first-level autonomous system continues to run in parallel, and unaware of the existence of the second level. For example, in [3] we reported on building a first layer of control which let the Creature avoid objects and then adding a layer which instilled an activity of trying to visit distant visible places. The second layer injected commands to the motor control part of the first layer directing the robot towards the goal, but independently the first layer would cause the robot to veer away from previously unseen obstacles. The second layer monitored the progress of the Creature and sent updated motor commands, thus achieving its goal without being explicitly aware of obstacles, which had been handled by the lower level of control.”
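To fix ideas, here is a minimal sketch, in Python, of the kind of two-layer arrangement that passage describes. Everything in it is invented for illustration (the function names, the sensor readings, the fixed-priority arbitration); Brooks’ actual creatures were built from networks of augmented finite state machines, not from anything like this. The point the sketch displays is purely structural: each layer does its own sensing, and the competition between layers is settled by wiring, with no central process that represents and compares the values of the available actions.

```python
import random

# A hypothetical, minimal sketch of Brooks-style layered control (not his
# actual architecture). Each layer connects its own sensing directly to a
# motor command; there is no shared world model and no stored 'value' for
# either behaviour.

def avoid_layer(world):
    """Layer 0: avoid objects. Fires only when its own sensing is triggered."""
    if world["obstacle_distance"] < 1.0:          # obstacle dangerously close
        return {"turn": random.choice([-90, 90]), "speed": 0.2}  # veer away
    return None                                   # stay quiet otherwise

def explore_layer(world):
    """Layer 1: head for a distant visible place, unaware of obstacles."""
    return {"turn": world["goal_bearing"], "speed": 1.0}

def motor_output(world):
    """Fixed wiring between layers: layer 0 pre-empts layer 1 whenever it
    fires. This is structural priority, not a comparison of represented
    action values."""
    command = avoid_layer(world)
    return command if command is not None else explore_layer(world)

# One control cycle: an obstacle appears, so the avoidance layer drives the
# motors and the exploration layer's command is simply displaced.
world = {"obstacle_distance": 0.5, "goal_bearing": 30}
print(motor_output(world))
```

Nothing in the sketch stores or weighs values for the two behaviours; the obstacle-avoiding layer simply wins whenever it fires. Brooks’ own machinery for competition between layers (suppression and inhibition on particular wires) is richer than this fixed priority, but the structural point is the same.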

Although Brooks makes regular references to the goals and functions of his creatures, these goals are implicit in the design of the layers, and are not internally represented:

“The purpose of the Creature is implicit in its higher-level purposes, goals or layers. There need be no explicit representation of goals that some central (or distributed) process selects from to decide what is most appropriate for the Creature to do next.” (Emphasis added.)

He’s also clear that there is competition between the behaviours. He says that ‘each activity producing layer connects perception to action directly’, but that while an observer might impute central control, all that the creature is, is ‘a collection of competing behaviors’.

So, if Brooks is correct, even if only about a limited class of agents, then what he says poses two related challenges to the view that orderly behaviour requires a common currency:

  • He claims that there’s a way to have multiple competing goals without a proximal common currency. 
  • He claims that multiple competing goals can be handled without a final common control pathway.

(The two challenges I’ve stated clearly overlap a great deal, but there are differences of emphasis at least. Also, as I’ll explain in future postings, there is a separate ‘final common path’ argument for a proximal currency that is independent of arguments concerning other kinds of order in choice.)

I also note that what Brooks says doesn’t directly imply the denial of an ultimate currency. Indeed, Brooks positively encourages the notion that his creatures instantiate some kind of ultimate currency by frequently saying that their behaviour is ‘adaptive’. He clearly doesn’t mean that they literally evolved under natural selection. But just as clearly he is saying that their behaviour satisfies some condition of success (perhaps even near-optimality) at dealing with conflicting goals.

I’ll discuss some criticisms of Brooks’ position in later postings. All I’ve really tried to do here, and for now, is note why Brooks is ‘on the reading list’ for this project.

Related postings on this blog

Currencies can be ‘ultimate’ or ‘proximal’

Forthcoming attractions on this blog

Andy Clark on economic decisions
Proximal currencies can be cognitive or somatic
Ways of responding to Brooks’ challenge

References

Brooks, R. (1991) “Intelligence without representation.” Artificial Intelligence 47: 139-159.



[1] Put more carefully, this should say Cognitive Proximal Common Currency. (Cognitive as opposed to, inter alia, somatic. This is a distinction I’ll explain and put to work at a later date.) See Currencies can be 'ultimate' or 'proximal' and future postings on this blog.
