Computation, Implementation, Cognition

Oron Shagrir

1. Introduction

Hilary Putnam (1988) and John Searle (1992) famously argue that almost every physical system implements every finite computation.[1] This claim, if correct, puts certain functional and computational views of the mind at risk of triviality. One such view is the computational sufficiency thesis (CST), articulated by David Chalmers in "A Computational Foundation for the Study of Cognition" (Chalmers 2012). CST states that "the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties". But if every physical system implements every computational structure (as Putnam and Searle claim), and if CST is true, then every physical system implements the computational structure that suffices for cognition (possession of a mind). Hence, every physical system is a cognitive system. If CST is true, in other words, then rocks, chairs and planets have the kind of cognition that we do.

Chalmers (1996) and others have challenged the universal implementation claim; Chalmers offers, instead, a theory of implementation that allegedly avoids the pitfalls of universal implementation. My aim in this paper is to argue that the available theories of implementation are consistent with the nomological possibility of systems that simultaneously implement different complex automata; hence, if CST is true, each such system might simultaneously possess different minds. Elsewhere I argue that these cases undermine CST (Shagrir 2012). My focus here is on theories of implementation. After discussing Chalmers's theory, I will examine whether other proposals can avoid this simultaneous implementation result.

[1] An early version (from the 1970s) of this argument is attributed to Ian Hinckfuss (see Cleland 2002). Hinckfuss points out that under a suitable categorization of states, a bucket of water sitting in the sun ("Hinckfuss's pail") can be taken to implement the functional organization of a human agent.

2. Putnam's argument

In the appendix to Representation and Reality (1988: 121-125), Putnam advances the claim that every physical system satisfying minimal conditions implements every finite state automaton: he proves that every ordinary open system is a realization of every abstract finite automaton. Searle (1992) advances a somewhat similar claim, arguing that the wall behind him implements the Wordstar program. These claims clearly threaten CST. They imply that a human and a rock have the same functional organization. But if a functional organization of a certain complexity is sufficient for having a mind, as CST states, then the rock too should be deemed to have a mind; in fact, almost everything, given that it realizes this automaton, has a mind. Moreover, if Putnam's theorem is true, then my brain simultaneously implements infinitely many different functional organizations, each constituting a different mind. It thus seems that I should simultaneously be endowed with all possible minds!

Putnam's theorem pertains to abstract finite state automata (FSA) without I/O (inputs/outputs). Take the FSA that runs through the state-sequence ABABABA in the time interval we want to simulate, where A and B are the states of the FSA, and assume that a rock can realize this run in a 6-minute interval, say from 12:00 to 12:06. Assume that the rock is in a maximal physical state S0 at 12:00, S1 at 12:01, and so forth (a maximal physical state being its total physical makeup specified in complete detail). Also assume that these states differ from each other; this is Putnam's Principle of Noncyclical Behavior. Now define a physical state a as S0 v S2 v S4 v S6, and a physical state b as S1 v S3 v S5. The rock implements the FSA in the sense that the causal structure of the rock "mirrors" the formal structure of the FSA: the physical state a corresponds to the logical state A, the physical state b corresponds to the logical state B, and the causal transitions from a to b correspond to the computational transitions from A to B. A complete proof would require further elaboration, as well as a Principle of Physical Continuity, but it applies to any I/O-less FSA.
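This disjunctive construction can be made concrete in a few lines of code. The sketch below illustrates Putnam's mapping only: the rock's maximal physical states are stand-ins, represented as opaque labels S0 through S6, one per minute of the interval.

```python
# A minimal sketch of Putnam's disjunctive mapping. The labels S0..S6 are
# placeholders for the rock's maximal physical states, one per minute of
# the 12:00-12:06 interval; by the Principle of Noncyclical Behavior they
# are pairwise distinct.
trace = ["S0", "S1", "S2", "S3", "S4", "S5", "S6"]

# The disjunctive physical states: a = S0 v S2 v S4 v S6, b = S1 v S3 v S5.
a = {"S0", "S2", "S4", "S6"}
b = {"S1", "S3", "S5"}

def logical_state(s):
    """Map a maximal physical state to an FSA state via the disjunctions."""
    return "A" if s in a else "B"

# Under this grouping the rock's causal trajectory "mirrors" the FSA run.
run = [logical_state(s) for s in trace]
assert run == list("ABABABA")
print("".join(run))  # ABABABA
```

The grouping does all the work: any sequence of distinct physical states can be partitioned after the fact so that it realizes the desired run, which is why the construction generalizes to any I/O-less FSA.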
Putnam notes (p. 124) that the proof cannot be immediately extended to FSA with I/O. If the I/O are functionally individuated, they can be treated much like abstract internal states, and the extension is natural. But if the I/O are specific kinds of physical organs, then the rock, which lacks motor and sensory organs of the required sort, cannot realize the automaton. On this view the rock cannot implement a mind because it lacks the motor and sensory organs of thinking organisms.

CST is in real trouble, however, if the difference between rocks and humans is exhausted by the I/O issue. After all, the whole point of CST is that the difference between thinking organisms and rocks is rooted in the complexity of functional organization. But if Putnam's argument is correct, the difference between humans and rocks does not lie in the complexity of their respective internal computational arrangements (which turn out to be the same for humans and rocks), but in the kinds of I/O they can handle. This, it seems, is just behaviorism in a new guise. Indeed, Putnam points out (pp. 124-125) that reliance on physical I/O is a collapse into behaviorism. Computationalism improves on behaviorism by taking into account not only I/O, but also the mediating algorithm. Behaviorism is false because there are beings that have the same I/O dependencies as humans, but different mentalities (Block 1995). Computationalism holds that the reason they differ is that the "internal" relations among the logical states differ, that is, the implemented algorithm differs. But consider a physical object with the right kind of I/O. What is true of this entity, on Putnam's theorem, is that it realizes all the possible algorithms that mediate the I/O; there can be no difference between the algorithms implemented by this entity and the algorithms implemented by humans. "In short, 'functionalism', if it were correct, would imply behaviorism! If it is true that to possess given mental states is simply to possess a certain 'functional organization', then it is also true that to possess given mental states is simply to possess certain behavior dispositions!" (1988: 124-125).

3. Chalmers's reply

Several people have attempted to downplay Putnam's result, arguing that it takes more than Putnam allows to implement the functional organizations that are minds.[2] Chalmers (1996) provides a detailed counterargument along these lines. His contention is that there are constraints on the notion of implementation that Putnam does not take into account, constraints not satisfied by rocks. For one thing, the state transitions of the implementing machine must be reliable and counterfactual-supporting. For another, the causal structure of the physical object should mirror all the possible formal state transitions of the implemented FSA. In Putnam's proof, the rock implements only a single run (the transition from A to B and back), but not other runs that might exist. If the FSA has other state transitions, e.g., C → D and D → C, these transitions should also be mirrored by the rock's dynamics.

[2] See, for example, Block (1995), Chrisley (1994), Copeland (1996), and Melnyk (1996).

It thus follows, according to Chalmers, that Putnam's proof applies to relatively simple kinds of automata, but not to the combinatorial state automata (CSA) that are more likely to be the minds implemented by brains. Roughly, a CSA is much like an FSA, except that it has a more complex, combinatorial internal structure: each state is a combination of substates, and any state transition is sensitive to the combinatorial structure of the previous combined state. "CSAs are much less vulnerable to Putnam-style objections [than] FSAs. Unlike FSA implementations, CSA implementations are required to have complex internal structure and complex dependencies among their parts. For a complex CSA, the right sort of complex structure will be found in very few physical systems" (1996: 325). Chalmers concludes that brains, but not rocks, implement the complex CSA that is more likely to constitute a mind.
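To make the FSA/CSA contrast concrete, here is a minimal sketch. The particular substates and update rule are illustrative assumptions rather than Chalmers's own example; the point is only that an FSA step consults a single unstructured state, whereas a CSA step is sensitive to the combinatorial structure of a whole vector of substates.

```python
# An FSA state is monolithic: one unstructured label, one table lookup.
fsa_delta = {"A": "B", "B": "A"}  # the ABAB... automaton of section 2

def fsa_step(state):
    return fsa_delta[state]

# A CSA state is a vector of substates; the next value of each component
# depends on the structure of the whole previous state (here, illustratively,
# each substate becomes the XOR of its two cyclic neighbors).
def csa_step(state):
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

print(fsa_step("A"))           # B
print(csa_step((1, 0, 0, 1)))  # (1, 1, 1, 1)
```

An implementation of the CSA must supply a distinct physical substate for each component, together with physical dependencies matching the update rule, which is why, as Chalmers says, the right sort of complex structure is found in very few physical systems.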
I do not dispute these claims. Moreover, I agree with further claims Chalmers makes concerning implementation. One claim is that every system implements some FSA: a rock implements a trivial one-state automaton. Another is that physical systems typically implement more than one FSA; in particular, if a system implements a more complex FSA, it typically also implements simpler FSAs simultaneously. A third is that only a few physical systems implement very complex CSAs; in particular, very few physical systems implement a CSA that (if CST is true) suffices for cognition. Chalmers points out that the chance of an arbitrary physical system implementing such a CSA is very low. I agree with this too.

Nevertheless, I want to point out that one could construct systems that simultaneously implement different complex CSAs. Though I do not have a mathematical proof of this, I want to illustrate my claim with an example of simple physical systems that simultaneously implement different automata. The description I use is different from the one used by Chalmers, who describes automata in terms of "total states". I describe them in terms of gates (or "neural cells"), which are …
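The phenomenon this gate-based description is meant to illustrate, namely one and the same physical device counting as two different gates under two different groupings of its physical states, can be given a loose, hypothetical sketch. The specific device, voltage levels, and groupings below are assumptions made for illustration; they are not necessarily the example the paper develops.

```python
from itertools import product

# A hypothetical tri-stable gate over the voltage levels 0, 2.5 and 5
# (assumed values): it outputs 0 when both inputs are 0, 5 when both
# inputs are 5, and 2.5 otherwise.
def gate(v1, v2):
    if v1 == v2 == 0:
        return 0
    if v1 == v2 == 5:
        return 5
    return 2.5

# Two ways of grouping the same voltages into computational digits.
def digit_A(v):
    return 1 if v == 5 else 0    # grouping A: only 5V counts as '1'

def digit_B(v):
    return 1 if v >= 2.5 else 0  # grouping B: 2.5V and 5V count as '1'

for v1, v2 in product([0, 2.5, 5], repeat=2):
    out = gate(v1, v2)
    # Under grouping A the device satisfies the AND truth table...
    assert digit_A(out) == (digit_A(v1) and digit_A(v2))
    # ...and under grouping B the very same device satisfies the OR table.
    assert digit_B(out) == (digit_B(v1) or digit_B(v2))

print("one device, two gates: AND under grouping A, OR under grouping B")
```

If such groupings are equally legitimate ways of fixing computational states, then a single physical system of this kind simultaneously implements different automata, which is the claim at issue.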