The language of thought hypothesis (LOTH),[1] sometimes known as thought ordered mental expression (TOME),[2] is a view in linguistics, philosophy of mind, and cognitive science put forward by the American philosopher Jerry Fodor. It describes thought as possessing "language-like" or compositional structure (sometimes known as mentalese). On this view, simple concepts combine in systematic ways (akin to the rules of grammar in language) to build thoughts. In its most basic form, the theory states that thought, like language, has syntax.
Using empirical evidence drawn from linguistics and cognitive science to describe mental representation from a philosophical vantage point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only 'remotely plausible' when expressed as a system of representations that is "tokened" by a linguistic or semantic structure and operated upon by means of a combinatorial syntax.[1] Linguistic tokens used in mental language describe elementary concepts, which are operated upon by logical rules establishing causal connections that allow for complex thought. Both the syntax and the semantics of this system of mental representations have a causal effect on its properties.
These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. The LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate.[3][4][5]
Presentation
The hypothesis applies to thoughts that have propositional content; it is not meant to describe everything that goes on in the mind. It appeals to the representational theory of thought to explain what those tokens actually are and how they behave: there must be a mental representation that stands in some unique relationship with the subject of the representation and has specific content. Complex thoughts get their semantic content from the content of basic thoughts and the relations they bear to each other, and thoughts can only relate to each other in ways that do not violate the syntax of thought. That syntax can be expressed in first-order predicate calculus.
The thought "John is tall" is clearly composed of two sub-parts, the concept of John and the concept of tallness, combined in a manner that may be expressed in first-order predicate calculus as a predicate 'T' ("is tall") that holds of the entity 'j' (John). A fully articulated proposal for what a LOT would have to take into account greater complexities such as quantification and propositional attitudes (the various attitudes people can have towards statements; for example I might believe or see or merely suspect that John is tall).
Precepts
- There can be no higher cognitive processes without mental representation. The only plausible psychological models represent higher cognitive processes as representational and computational, and computation needs a representational system as an object to operate upon. We must therefore attribute a representational system to organisms for cognition and thought to occur.
- There is a causal relationship between our intentions and our actions. Because mental states are structured in a way that causes our intentions to manifest themselves in what we do, there is a connection between how we view the world and ourselves and what we do.
Reception
The language of thought hypothesis has been both controversial and groundbreaking. Some philosophers reject the LOTH, arguing that our public language is our mental language: a person who speaks English thinks in English. Others contend that complex thought is present even in those who do not possess a public language (e.g. babies, aphasics, and even higher-order primates), and that therefore some form of mentalese must be innate.[citation needed]
The notion that mental states are causally efficacious diverges from behaviorists such as Gilbert Ryle, who held that there is no gap between the cause of a mental state and the effect of behavior. Rather, Ryle proposed that people act in some way because they are disposed to act in that way, whereas the LOTH holds that the causal mental states in question are representational. An objection to this point comes from John Searle in the form of biological naturalism, a non-representational theory of mind that accepts the causal efficacy of mental states. Searle divides intentional states into low-level brain activity and high-level mental activity: the lower-level, nonrepresentational neurophysiological processes, rather than some higher-level mental representation, have causal power in intention and behavior.[citation needed]
Tim Crane, in his book The Mechanical Mind,[6] states that, while he agrees with Fodor, his reasons are very different. A logical objection challenges the LOTH's explanation of how sentences in natural languages get their meaning. On that explanation, "Snow is white" is true if and only if P is true in the LOT, where P means in the LOT what "Snow is white" means in the natural language. But any symbol manipulation needs some way of deriving what those symbols mean.[6] If the meaning of natural-language sentences is explained in terms of sentences in the LOT, then sentences in the LOT must in turn get their meaning from somewhere else, and an infinite regress of sentences getting their meaning threatens. Sentences in natural languages get their meaning from their users (speakers, writers),[6] so sentences in mentalese would have to get their meaning from the way thinkers use them, and so on ad infinitum. This regress is often called the homunculus regress.[6]
Daniel Dennett accepts that homunculi may be explained by other homunculi but denies that this yields an infinite regress: each explanatory homunculus is "stupider" or more basic than the homunculus it explains, and the regress bottoms out at a level so simple that it needs no interpretation.[6] John Searle points out that it still follows that the bottom-level homunculi are manipulating some sort of symbols.
The LOTH implies that the mind has some tacit knowledge of the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning).[6] If the LOTH cannot show that the mind knows that it is following the particular set of rules in question, then the mind is not computational, because it is not governed by computational rules.[3][6] Critics also point out that this set of rules appears incomplete as an explanation of behavior: many conscious beings behave in ways contrary to the rules of logic, and this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not accord with this set of rules.[6]
Another objection within the representational theory of mind concerns the relationship between propositional attitudes and representation. Dennett points out that a chess program can have the attitude of "wanting to get its queen out early" without any representation or rule that explicitly states this. A multiplication program on a computer computes in the computer language of 1s and 0s, yielding representations that do not correspond to any propositional attitude.[3]
Susan Schneider has recently developed a version of LOT that departs from Fodor's approach in numerous ways. In her book, The Language of Thought: a New Philosophical Direction, Schneider argues that Fodor's pessimism about the success of cognitive science is misguided, and she outlines an approach to LOT that integrates LOT with neuroscience. She also stresses that LOT is not wedded to the extreme view that all concepts are innate. She fashions a new theory of mental symbols, and a related two-tiered theory of concepts, in which a concept's nature is determined by its LOT symbol type and its meaning.[4]
Connection to connectionism
Connectionism is an approach to artificial intelligence that accepts much of the same theoretical framework as the LOTH, namely that mental states are computational and causally efficacious, and very often that they are representational. However, connectionism stresses the possibility of thinking machines, most often realized as artificial neural networks: interconnected sets of nodes. Connectionism describes mental states as able to create memory by modifying the strength of these connections over time. Two notions central to such networks are the interpretation of units and the learning algorithm: "units" can be interpreted as neurons or groups of neurons, and a learning algorithm changes connection weights over time, allowing networks to modify their connections. Connectionist networks also change over time via their activations. An activation is a numerical value, held by a unit at a given time, that represents some aspect of that unit; activation spreading is the propagation, over time, of activation to all the units connected to the activated unit.
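To make this picture concrete, here is a minimal sketch in Python of a tiny network with units, connection weights, activation spreading, and a learning rule. The specific update rules (a tanh squashing function and a Hebbian weight change) are generic textbook choices assumed for the illustration, not drawn from any particular connectionist model discussed here.

```python
# Minimal sketch of the connectionist picture described above: units hold
# numerical activations, connections have weights, activation spreads to
# connected units, and a learning rule changes weights over time.
# Illustration only; the update rules are generic textbook choices.

import numpy as np

rng = np.random.default_rng(0)

n_units = 4
weights = rng.normal(scale=0.1, size=(n_units, n_units))  # connection strengths
np.fill_diagonal(weights, 0.0)                            # no self-connections
activation = np.array([1.0, 0.0, 0.0, 0.0])               # unit 0 activated

def spread(activation, weights):
    """One step of activation spreading to all connected units."""
    return np.tanh(weights @ activation)   # squash to keep values bounded

def hebbian_update(weights, activation, lr=0.01):
    """Strengthen connections between co-active units (Hebbian learning)."""
    return weights + lr * np.outer(activation, activation)

for step in range(5):
    activation = spread(activation, weights)
    weights = hebbian_update(weights, activation)  # memory as weight change

print("final activations:", np.round(activation, 3))
```

Note that nothing in this sketch is a discrete symbol: "memory" is just the gradual change of the weight matrix, which is exactly the contrast with the LOT picture that the debate below turns on.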
Since connectionist models can change over time, supporters of connectionism claim that connectionism can solve the problems that the LOTH raises for classical AI. These problems are that machines with a LOT-style syntactic framework are often much better than human minds at solving certain problems and storing data, yet much worse at things the human mind is quite adept at, such as recognizing facial expressions and objects in photographs and understanding nuanced gestures.[6] Fodor defends the LOTH by arguing that a connectionist model is just a realization or implementation of the classical computational theory of mind and therefore necessarily employs a symbol-manipulating LOT.
Fodor and Zenon Pylyshyn use the notion of cognitive architecture in their defense. Cognitive architecture is the set of basic functions of an organism with representational input and output. They argue that it is a law of nature that cognitive capacities are productive, systematic, and inferentially coherent: anyone who can understand one sentence of a certain structure has the ability to produce and understand other sentences of that structure.[7] A cognitive model must have a cognitive architecture that explains these laws and properties in some way compatible with the scientific method. Fodor and Pylyshyn say that cognitive architecture can explain the property of systematicity only by appealing to a system of representations, and that connectionism either employs such a cognitive architecture or it does not. If it does, then connectionism uses a LOT; if it does not, then it is empirically false.[3]
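The systematicity point can be illustrated with a short sketch: in a system with combinatorial syntax, the very rule that builds one structured thought builds every recombination of its constituents for free. All names below are invented for the example.

```python
# Sketch of Fodor and Pylyshyn's systematicity point: a system with
# combinatorial syntax that can form one thought of a given structure can,
# with no further machinery, form every recombination of its constituents.
# Illustration only; names are invented for this example.

from itertools import permutations

def compose(relation: str, *args: str) -> str:
    """Build a structured representation from a relation and its arguments."""
    return f"{relation}({', '.join(args)})"

constituents = ["john", "mary"]
relation = "Loves"

# If the system can represent Loves(john, mary), the same combinatorial
# rule immediately yields Loves(mary, john) as well:
for args in permutations(constituents):
    print(compose(relation, *args))
# -> Loves(john, mary)
# -> Loves(mary, john)
```

On Fodor and Pylyshyn's view, a network lacking such combinatorial structure would have to learn each recombination separately, which is why they claim that systematicity requires a LOT-style architecture.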
Connectionists have responded to Fodor and Pylyshyn by denying that connectionism uses a LOT, by denying that cognition is essentially a function that uses representational input and output, or by denying that systematicity is a law of nature that rests on representation.[citation needed] Some connectionists have developed implementational connectionist models that can generalize in a symbolic fashion by incorporating variables.[8]
Empirical testing
Since it was first proposed, the LOTH has been tested empirically. Not all experiments have confirmed the hypothesis:
- In 1971, Roger Shepard and Jacqueline Metzler tested Pylyshyn's hypothesis that all symbols are understood by the mind in virtue of their fundamental mathematical descriptions.[9] Their experiment consisted of showing subjects a 2-D line drawing of a 3-D object and then the same object at some rotation. According to Shepard and Metzler, if Pylyshyn were correct, the amount of time needed to identify the object as the same object would not depend on its degree of rotation. Their finding that recognition time was proportional to the angle of rotation contradicts this hypothesis.
- There may be a connection between prior knowledge of what relations hold between objects in the world and the time it takes subjects to recognize the same objects. For example, subjects are less likely to recognize a hand that is rotated in a way that would be physically impossible for an actual hand.[citation needed] Subsequent experiments have also suggested that the mind may better manipulate mathematical descriptions as topographical wholes.[citation needed] These findings have illuminated what the mind is not doing in terms of how it manipulates symbols.[citation needed]
- Certain deaf adults who can neither learn a spoken language nor gain access to a sign language, known as home signers, in fact communicate with others like them and with the outside world using gestures and self-created signing. Although they have no experience of language or how it works, they are able to conceptualize not only iconic words but abstract ones as well, suggesting that they understood a concept before creating a gesture to express it.[10] Ildefonso, a homesigner who learned an established sign language at twenty-seven years of age, found that although his thinking became easier to communicate, he had lost the ability to communicate with other homesigners, as well as the ability to recall how his thinking worked without language.[11]
- Other studies of which thought processes might be non-lingual include a 1969 study by Berlin and Kay, which indicated that the color spectrum is perceived the same way no matter how many words a language has for different colors, and a study conducted in 1981 and revised in 1983, which suggested that counterfactuals are processed at the same rate regardless of how easily a language can convey them in words.[12]
- Maurits (2011) describes an experiment to measure the word order of the language of thought by the relative time needed to recall the verb, agent, and patient of an event. The agent was recalled most quickly and the verb least quickly, suggesting a subject–object–verb language of thought (SOVLOT).[13] Some natural languages, such as Persian, use this ordering, which on this view would make converting concepts expressed in those languages into thought concepts comparatively direct.
Notes
edit- ^ a b "The Language of Thought Hypothesis". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2019.
- ^ Tillas A. (2015-08-16). "Language as grist to the mill of cognition". Cogn Process. 16 (3): 219–243. doi:10.1007/s10339-015-0656-2. PMID 25976728. S2CID 18301424.
- ^ a b c d Murat Aydede (2004-07-27). The Language of Thought Hypothesis. Metaphysics Research Lab, Stanford University.
- ^ a b Schneider, Susan (2011). The Language of Thought: a New Direction. Boston: Mass: MIT Press.
- ^ Fodor, Jerry A. (1975-01-01). The Language of Thought. Harvard University Press. ISBN 9780674510302.
- ^ a b c d e f g h i Crane, Tim (2005). The mechanical mind : a philosophical introduction to minds, machines and mental representation (2nd, repr. ed.). London: Routledge. ISBN 978-0-415-29031-9. Archived from the original on 2015-01-04. Retrieved 2013-01-15.
- ^ James Garson (2010-07-27). Connectionism. Metaphysics Research Lab, Stanford University.
- ^ Chang, Franklin (2002). "Symbolically speaking: a connectionist model of sentence production". Cognitive Science. 26 (5): 609–651. doi:10.1207/s15516709cog2605_3. ISSN 1551-6709.
- ^ Shepard, Roger N.; Metzler, Jacqueline (1971-02-19). "Mental Rotation of Three-Dimensional Objects". Science. 171 (3972): 701–703. Bibcode:1971Sci...171..701S. CiteSeerX 10.1.1.610.4345. doi:10.1126/science.171.3972.701. PMID 5540314. S2CID 16357397.
- ^ Coppola, M., & Brentari, D. (2014). From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner. Frontiers in Psychology, 5, 830. [1]
- ^ Downey, G. (2010, July 21). Life without language. Retrieved December 19, 2015, from [2]
- ^ Bloom, P., & Keil, F. (2001, September 1). Thinking Through Language. Retrieved December 19, 2015, from [3]
- ^ Luke Maurits. Representation, information theory and basic word order. University of Adelaide, 2011-09. Accessed 2018-08-14.
External links
- Rescorla, Michael. "The Language of Thought Hypothesis". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
- Katz, Matthew. "The Language of Thought Hypothesis". Internet Encyclopedia of Philosophy.
- Language of Thought Archived 2007-06-15 at the Wayback Machine - By Larry Kaye.
- Revealing The Language Of Thought - By Brent Silby
- Jerry Fodor Homepage
- The Language Of Thought Hypothesis: State Of The Art - By Murat Aydede