Plenary Lectures

Presidential Address: An integrated information theory of consciousness

Giulio Tononi
Department of Psychiatry
University of Wisconsin-Madison

The integrated information theory (IIT) starts from phenomenology and makes use of thought experiments to claim that consciousness is integrated information. Specifically: (i) the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements; (ii) the quality of experience is specified by the set of informational relationships generated within that complex. Integrated information is defined as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. Qualia space (Q) is a space where each axis represents a possible state of the complex, each point is a probability distribution of its states, and arrows between points represent the informational relationships among its elements generated by causal mechanisms. Together, the set of informational relationships within a complex constitutes a shape in Q that completely and univocally specifies a particular experience. Several observations concerning the neural substrate of consciousness fall naturally into place within the IIT framework. Among them are the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the distinct role of different cortical architectures in affecting the quality of experience. Equating consciousness with integrated information carries several implications for our view of nature.
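The abstract itself does not spell out how integrated information is computed. As a rough orientation only, the LaTeX sketch below follows the formalization given in Tononi's published IIT papers (e.g. Tononi 2008); the notation and the particular formulas are assumptions drawn from that work, not part of this abstract.

% Sketch of the standard IIT quantities (assumed from Tononi's published
% formulation, not stated in the abstract). X is a system of elements,
% x_1 its current state, X_0 its possible previous states.
%
% Effective information: how much the current state specifies the system's
% prior states, relative to the maximum-entropy "potential" repertoire.
\[
  \mathrm{ei}(X_1 = x_1) \;=\;
  D_{\mathrm{KL}}\!\left[\, p(X_0 \mid X_1 = x_1) \;\middle\|\; p^{\mathrm{max}}(X_0) \,\right]
\]
% Integrated information: the information the whole generates above and
% beyond its parts, evaluated across the minimum information partition (MIP)
% of X into parts M^1, ..., M^n.
\[
  \varphi(x_1) \;=\;
  D_{\mathrm{KL}}\!\left[\, p(X_0 \mid X_1 = x_1) \;\middle\|\;
  \prod_{k=1}^{n} p\!\left(M^k_0 \,\middle|\, M^k_1 = m^k_1\right) \right]
  \quad \text{across the MIP.}
\]
% A "complex" is then a set of elements for which \varphi is locally maximal;
% the quantity of consciousness is claimed to correspond to this \varphi.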

Keynote Lecture 1: Origins of Shared Intentionality

Michael Tomasello
Department of Developmental and Comparative Psychology
MPI for Evolutionary Anthropology

Keynote Lecture 2: Armchair reflections on consciousness and the science of consciousness

Jaegwon Kim
Department of Philosophy
Brown University

There are two principal philosophical questions about consciousness and the science of consciousness. The first, well known and by now a bit boring, is this: Is it possible to give a neuroscientific explanation, or account, of consciousness? The second question, less widely discussed, is the converse of the first: Does, or can, consciousness itself play a theoretical-explanatory role in neuroscience?

Concerning the first question, the British Emergentists, such as C.D. Broad and C. Lloyd Morgan, had their own views about what such an explanation would require. Briefly, their idea was that an explanation of consciousness required the deduction of truths about consciousness from truths (including laws) about neural/biological phenomena. I believe that this idea, which enjoys much intuitive plausibility, survives in the works of some contemporary writers – for example, in the idea of “a priori physicalism” advocated by David Chalmers and Frank Jackson. I will try to show that, in spite of its initial plausibility, this approach involves certain complexities and difficulties, and even a possible incoherence.

As regards the second question about the theoretical-explanatory role of consciousness, the ongoing debates over the causal efficacy of consciousness are directly relevant. I will briefly sketch the well-known exclusion argument. What the argument presumptively shows is that unless consciousness is physically reduced, it cannot be credited with causal powers to affect the course of events in the physical world. The physical reducibility of consciousness is closely related to the first question above regarding its explainability within brain science. I believe that a near-consensus opinion is that such reduction is not possible, and that consciousness, though grounded in neural processes, is not itself a neural process. If the exclusion argument is in the right ballpark, the physical irreducibility of consciousness will make consciousness an epiphenomenon – a phenomenon with no powers to causally affect anything else, at least in the physical domain.

From my armchair, it looks as though brain scientists indeed accept this epiphenomenalist implication in their research practices – that is, they appear to practice what may be called “methodological epiphenomenalism”: consciousness, as a phenomenon distinct from neural processes, has no theoretical-explanatory role to play in neuroscience; whether or not it is explainable in terms of neural processes, it does not itself enter into the explanation, or prediction, of neural phenomena with its own causal-explanatory power.

Finally, these reflections point to yet another question. If a given group of phenomena are epiphenomenal, with no powers to cause anything else, including readings on measuring instruments, how is it possible to investigate them scientifically? If consciousness is without causal powers to affect physical phenomena (including the investigator’s sensory systems, brain-scanning devices, etc.), how is a “science” of consciousness possible? Are the working scientists engaged in consciousness research all closet physicalist reductionists? Or perhaps what they are investigating is not consciousness but something else? If consciousness has no physical effects, how can the multi-colored, pulsating images shown on the television monitor hooked up to an fMRI scanner be relevant to a scientific study of consciousness?

Background reading

  • J. Kim, Physicalism, or Something Near Enough (2005), chapters 1, 6.
  • J. Kim, “The Causal Efficacy of Consciousness”, in The Blackwell Companion to Consciousness (2007), ed. Max Velmans and Susan Schneider.

William James Prize Winner: To be announced on Monday the 8th of June.

Keynote Lecture 3: What is the Explanatory Gap?

David Papineau
Department of Philosophy
King’s College London

I shall offer a simple diagnosis of the so-called 'explanatory gap'. The 'gap' is nothing to do with the inability of materialism to explain or derive facts about consciousness, as most philosophers suppose, but simply due to the way that materialism remains so strongly counterintuitive even after we are presented with the overwhelming evidence in its favour. From this perspective, the 'gap' is not some philosophical or scientific problem that remains after we embrace materialism, but rather a result of the psychological difficulty of embracing materialism properly in the first place. (We feel that there is something mysterious about the mind-brain relation only because we can't rid ourselves of the dualist thought that the brain somehow magically 'gives rise to' an extra realm of conscious feelings.)

After demonstrating that all of us (including professed materialists) are subject to such an intuition of dualism, I shall address the question of why this intuition should persist even though theoretical considerations show that it must be false. This is a psychological issue. I shall consider five different empirical hypotheses about the source of the intuition and will conclude that the intuition stems from the peculiar concepts that we use to think about conscious states.

Keynote Lecture 4: The development of a theory of mind: a tutorial

Susan Carey
Department of Psychology
Harvard University

On some accounts of consciousness, higher order thought is required for consciousness. Representations of mental states, including concepts of mental states, are deemed necessary for consciousness. Depending upon the facts about non-human animals and prelinguistic infants, these views may have the consequence that animals and infants cannot be conscious. I will not comment on the higher order theory of consciousness, but rather give a tutorial on the current state of the art concerning the evolution and ontogenesis of representations of mental states.

Keynote Lecture 5: Human volition: Towards a neuroscience of will

Patrick Haggard
Institute of Cognitive Neuroscience
University College London

The capacity for voluntary action is seen as essential to human nature. Yet neuroscience and behaviourist psychology have traditionally dismissed the topic as unscientific, perhaps because the mechanisms that cause actions have long been unclear. However, new research has identified networks of brain areas, including the pre-supplementary motor area, the anterior prefrontal cortex and the parietal cortex, that underlie voluntary action. These areas generate information for forthcoming actions, and also cause the distinctive conscious experience of intending to act and then controlling one’s own actions. Volition consists of a series of decisions regarding whether to act, what action to perform and when to perform it. Neuroscientific accounts of voluntary action may inform debates about the nature of individual responsibility.

Background reading

  • Haggard P (2008). Human volition: towards a neuroscience of will. Nature Reviews Neuroscience, 9, 934–946.
  • Haggard P (2005). Conscious intention and motor cognition. Trends in Cognitive Sciences, 9, 290–295.