Consciousness and metarepresentation: A computational sketch

Document Type: Article
Article Type: Other
Disciplines: Artificial intelligence
Topics: Theory of Consciousness
Keywords: consciousness, representation, higher-order thought, neural networks
Deposited by: Dr Axel Cleeremans
Date of Issue: 2007
Authors: Axel Cleeremans, Bert Timmermans, Antoine Pasquali
Journal/Publication Title: Neural Networks
Volume: 20
Issue Number: 9
Page Range: 1032-1039
Official URL: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6T08-4PN05N9-1&_user=532047&_coverDate=11%2F30%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000026678&_version=1&_urlVersion=0&_userid=532047&md5=7f5634d6f6432e8658e01b7820f4327b
Alternative URL: http://srsc.ulb.ac.be/axcWWW/papers/pdf/07-NN.pdf
Abstract: 
When one is conscious of something, one is also conscious that one is conscious. Higher-Order Thought Theory [Rosenthal, D. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press] takes it that it is in virtue of the fact that one is conscious of being conscious, that one is conscious. Here, we ask what the computational mechanisms may be that implement this intuition. Our starting point is Clark and Karmiloff-Smith’s [Clark, A., & Karmiloff-Smith, A. (1993). The cognizer’s innards: A psychological and philosophical perspective on the development of thought. Mind and Language, 8, 487–519] point that knowledge acquired by a connectionist network always remains “knowledge in the network rather than knowledge for the network”. That is, while connectionist networks may become exquisitely sensitive to regularities contained in their input–output environment, they never exhibit the ability to access and manipulate this knowledge as knowledge: The knowledge can only be expressed through performing the task upon which the network was trained; it remains forever embedded in the causal pathways that developed as a result of training. To address this issue, we present simulations in which two networks interact. The states of a first-order network trained to perform a simple categorization task become input to a second-order network trained either as an encoder or on another categorization task. Thus, the second-order network “observes” the states of the first-order network and has, in the first case, to reproduce these states on its output units, and in the second case, to use the states as cues in order to solve the secondary task. This implements a limited form of metarepresentation, to the extent that the second-order network’s internal representations become re-representations of the first-order network’s internal states. 
We conclude that this mechanism provides the beginnings of a computational account of mental attitudes, that is, of a cognitive system's understanding of the manner in which its first-order knowledge is held (belief, hope, fear, etc.). Consciousness, in this light, thus involves knowledge of the geography of one's own internal representations — a geography that is itself learned over time as a result of an agent's attributing value to the various experiences it enjoys through interaction with itself, the world, and others.
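The two-network arrangement described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' actual simulation: the network sizes, the toy categorization task, the learning rate, and the choice of an autoencoder for the second-order network's training regime are all illustrative assumptions. The key structural point it reproduces is that the second-order network receives only the frozen hidden states of the first-order network as input, and learns to re-represent them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# First-order network: a small MLP trained on a toy categorization task
# (classify whether the components of a 4-d stimulus sum to a positive value).
X = rng.uniform(-1.0, 1.0, size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)[:, None]

W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # first-order internal states
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                       # cross-entropy gradient at sigmoid output
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

# Second-order network: an autoencoder that "observes" the frozen first-order
# hidden states and must reproduce them through a bottleneck -- its bottleneck
# activations are re-representations of the first-order network's states.
H = sigmoid(X @ W1 + b1)                  # frozen first-order states as input
V1 = rng.normal(0.0, 0.5, (8, 4)); c1 = np.zeros(4)
V2 = rng.normal(0.0, 0.5, (4, 8)); c2 = np.zeros(8)

for _ in range(2000):
    m = sigmoid(H @ V1 + c1)              # metarepresentation (bottleneck)
    H_hat = sigmoid(m @ V2 + c2)
    d_hat = H_hat - H
    d_m = (d_hat @ V2.T) * m * (1.0 - m)
    V2 -= lr * m.T @ d_hat / len(H); c2 -= lr * d_hat.mean(axis=0)
    V1 -= lr * H.T @ d_m / len(H);   c1 -= lr * d_m.mean(axis=0)

first_order_acc = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == (y > 0.5)))
recon_err = float(np.mean((sigmoid(sigmoid(H @ V1 + c1) @ V2 + c2) - H) ** 2))
```

Note that nothing in the second-order network's training signal refers to the external stimuli: it only ever sees the first-order network's internal states, which is what makes its bottleneck codes re-representations rather than representations of the world.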
Attachment: NN-Cleeremans.pdf (984.76 KB)