ASSC 21 Tutorials
Oliver Mason (University of Surrey, UK; University College London, UK)
1. Background to the use of sensory deprivation paradigms in research, focusing on their use in inducing altered states of consciousness, dissociation, hallucinations, and other psychotic-like states. Some coverage of psychophysiological findings related to sensory deprivation. Coverage of ethical issues in research using these paradigms.
2. Provision of equipment including ganzfeld goggles and auditory stimuli to demonstrate sensory restriction paradigms.
3. Facilitated discussion of the topic.
Note: There will be equipment available for attendees to try out, including ganzfeld goggles and auditory stimuli to demonstrate sensory restriction paradigms. This will feed into discussion of potential research uses as well as broader discussion points arising from the background covered in the tutorial.
TUTORIAL 2: “Why Illusionism? The Case for a Progressive Consciousness Research Program”
Daniel Dennett (Tufts University, US)
Keith Frankish (The Open University, UK)
Enoch Lambert (Tufts University, US)
Illusionism is the view that there is no such thing as phenomenal properties of consciousness. What needs explaining is not phenomenal properties, but the strong intuitive pull of the conviction that such properties exist. While the position is often dismissed if considered at all, strong defenses of it exist. Most recently, Keith Frankish, in “Illusionism as a Theory of Consciousness”, has forcefully re-advanced illusionism as a primary contender in consciousness debates. His paper places the view in relation to other leading ones and nicely reviews the arguments for and against it. He also helpfully reviews the positions of previous illusionists such as Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey.
The proposed tutorial aims to promote understanding of the illusionist position and to show why, despite initial appearances, it can be considered the most promising position on consciousness for scientific research. To take a simple and much discussed phenomenon, does color exist in the physical world? If not, is it an illusion? Better answers grow out of the illusionist position.
Illusionism is often dismissed due to charges of denying the undeniable data of personal introspection. But the case for illusionism reveals the persistent incoherencies that arise when trying to pin down the alleged data, as well as when trying to account for experimental data indicating major mistakes in what we attribute to consciousness. The first part of the tutorial will address these issues.
Trying to explain why we say what we do about consciousness is one of the more important tasks for any theory of consciousness. Illusionists such as Nicholas Humphrey and Daniel Dennett have offered some of the more intriguing possible explanatory hypotheses for how we come to have the illusion of phenomenal consciousness. The second part of the tutorial will look at how such hypotheses can help move the scientific investigation of consciousness forward.
TUTORIAL 3: “Quantifying metacognition: measures, models and theories”
Stephen Fleming (University College London, UK)
Metacognition describes the ability to reflect on and control our performance in various domains – “thinking about thinking”. Measurement of metacognition typically focuses on the correspondence between confidence and trial-by-trial accuracy. In general, for healthy subjects endowed with metacognitive sensitivity, when one is confident, one is more likely to be correct. Thus the degree of association between accuracy and confidence can be taken as a quantitative measure of metacognition. Many studies use a statistical correlation coefficient (e.g. Pearson's r) or a variant to assess this degree of association, but such measures are susceptible to undesirable influences from factors such as response biases and often conflate task performance with metacognitive ability. This tutorial is aimed both at researchers seeking a practical introduction to bias-free methods for quantifying metacognition (e.g. meta-d′) and at experimentalists/theorists interested in how these measures connect to recent models of metacognition and awareness. No prior knowledge will be assumed.
Part 1 will provide an overview of the topic, explore practical measurement techniques grounded in second-order reports (e.g. confidence ratings, opt-out designs), and introduce metacognitive measures based on signal detection theory and receiver operating characteristic (ROC) analysis that are “bias free”. I will distinguish between the concepts of metacognitive bias, sensitivity and efficiency. Wherever possible, worked examples using MATLAB code will be provided, and interactive discussion will be encouraged.
Part 2 will focus on theories and models of metacognition, in particular recent accounts of performance/confidence dissociations. I will explore the relationship between different facets of metacognition such as confidence and error monitoring, and review evidence on the domain-generality/specificity of metacognitive processes. I will close by critically examining the relationship between metacognition and primary (e.g. sensory) consciousness.
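As a flavour of the “bias free” measures discussed in Part 1: meta-d′ itself requires fitting a signal detection model, but a simpler cousin, the area under the type-2 ROC, already quantifies how well confidence discriminates correct from incorrect trials. Below is a minimal Python sketch (the tutorial's own worked examples use MATLAB; the function name and rating scale are illustrative assumptions):

```python
import numpy as np

def type2_roc_area(correct, confidence, n_levels=4):
    """Area under the type-2 ROC: how well confidence ratings (1..n_levels)
    discriminate correct from incorrect trials. 0.5 means no metacognitive
    sensitivity; 1.0 means confidence perfectly tracks accuracy."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence)
    # Type-2 hit rate: P(conf >= c | correct); false-alarm rate: P(conf >= c | incorrect)
    hits = [np.mean(confidence[correct] >= c) for c in range(1, n_levels + 1)]
    fas = [np.mean(confidence[~correct] >= c) for c in range(1, n_levels + 1)]
    # The curve runs from (1, 1) at the lowest criterion down to (0, 0);
    # integrate with the trapezoid rule over the decreasing FA axis.
    h = np.array(hits + [0.0])
    f = np.array(fas + [0.0])
    return float(np.sum(0.5 * (h[:-1] + h[1:]) * (f[:-1] - f[1:])))
```

For example, if high confidence occurs only on correct trials the area is 1.0, while a constant confidence rating yields 0.5 regardless of task accuracy, illustrating the separation of metacognitive sensitivity from first-order performance.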
TUTORIAL 4: “Computational and Theoretical Foundations of the Predictive Mind”
Ryota Kanai (Araya Brain Imaging, Japan)
Jakob Hohwy (Monash University, Australia)
The free energy principle and the predictive coding hypothesis suggest that the brain performs statistical inference by making top-down predictions about the environment while constantly updating its estimates with bottom-up prediction errors. There has been growing interest in the role of predictive processing among consciousness researchers as a possible unifying principle for brain functions associated with conscious experience. The core idea of predictive coding is that the brain learns generative models of the environment from statistical regularities, uses them to form hypotheses through perception, and tests those hypotheses through action by minimizing prediction errors arising from the sensory inputs. This idea unifies perception and action as an integrated system and offers a general theoretical framework to account for learning, attention and intention. In this tutorial, we will introduce the free energy principle and predictive coding at a theoretical level in relation to consciousness. Then, we will guide the audience through the mathematical formulations step by step in an accessible form. We will start from the basic ideas behind variational Bayesian inference and provide a starting point for those who are interested in learning the theory but find the original papers difficult to follow. In particular, we will make clear the dependency of knowledge across published papers so that new learners can study the relevant papers efficiently. Then, we will discuss current empirical results that have linked conscious perception to the predictive coding framework. Finally, we will discuss how the free energy principle and predictive coding relate to modern deep neural network architectures, and suggest possible implementations in neural networks. We will encourage questions and discussion during the tutorial to facilitate understanding, while providing supplemental online materials for those who wish to study the theories further.
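The core loop described above, in which top-down predictions are corrected by bottom-up prediction errors, can be illustrated with a toy one-dimensional model: a single latent cause is inferred by gradient descent on precision-weighted prediction errors. This is a standard textbook reduction of the scheme, not material from the tutorial itself, and all numerical values are illustrative assumptions:

```python
import numpy as np

def infer_cause(sensory, g=2.0, prior_mean=0.0,
                sigma_s=1.0, sigma_p=1.0, lr=0.05, n_iter=200):
    """Toy predictive coding: infer a latent cause phi whose top-down
    prediction g*phi should explain a sensory sample, balancing the
    sensory prediction error against the prior prediction error
    (gradient descent on a simple Gaussian free energy)."""
    phi = prior_mean
    for _ in range(n_iter):
        eps_s = (sensory - g * phi) / sigma_s   # bottom-up sensory error
        eps_p = (phi - prior_mean) / sigma_p    # deviation from the prior
        phi += lr * (g * eps_s - eps_p)         # update to reduce both errors
    return phi
```

With the values above, the loop converges to the exact Gaussian posterior mean (g·s/σ_s + μ_prior/σ_p)/(g²/σ_s + 1/σ_p), showing how iterative error minimization implements Bayesian inference in this linear case.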
Note: We will make the tutorial interactive by encouraging questions at several points and by limiting the amount of material we will go through. While the concept of the free energy principle is simple, the mathematical details are often rather difficult to tackle; we shall deliberately go slowly over this material. We will have an interlude between the 1:20 and 2:00 marks where we will elicit suggestions and discussion from the audience about how to test the free energy principle empirically. We explicitly schedule 20 minutes for discussion at the end. Prior to the tutorial, we will build a webpage providing the outline of the tutorial and a list of relevant papers to maximise what the audience gains from the tutorial.
TUTORIAL 5: “Novel neuroimaging techniques for the non-invasive study of consciousness”
Lucas Parra (The City University of New York, US)
Jacobo Sitt (Pitié Salpêtrière Hospital, France)
Enzo Tagliazucchi (Netherlands Institute for Neuroscience, Netherlands)
Neuroimaging has consolidated its position as a powerful method for the detection of consciousness in healthy subjects and in unresponsive patients. The standard approach relies on mass univariate correlations between neurophysiological signals (e.g. BOLD signal in fMRI, spectral power changes in EEG) and tasks or sensory stimulation. Alternatively, the synchronization of spontaneous signal fluctuations has been extensively explored as an index of functional coupling between brain regions.
The application of these methods is limited in its capacity to reveal spatio-temporally evolving patterns of connectivity. Another important limitation is their univariate nature, which imposes limits on the detection of higher-order interactions in the data. Finally, traditional neuroimaging approaches are employed in the context of carefully designed experimental manipulations, but are generally less useful for interrogating the response to the natural stimuli comprising our everyday experience.
We will present a series of cutting-edge methods overcoming these limitations. These are particularly useful for the study of consciousness, since converging evidence suggests that consciousness can be associated with the spatio-temporal complexity of brain activity; in particular, with a large repertoire of dynamic brain states (i.e. differentiation) maintaining a sufficient degree of functional cohesion (i.e. integration). Conscious perception of natural stimuli remains a relatively open subject requiring novel algorithms that can study brain activity independently of well-defined external cues.
Lucas Parra will present algorithms allowing the detection of an attentional state based on inter-subject EEG synchronization to natural stimuli. Enzo Tagliazucchi will discuss the detection of complex patterns such as scale-free avalanches and spatio-temporal correlations in fMRI and M/EEG data, which have been linked to the level of conscious awareness. Finally, Jacobo Sitt will present a computational framework to infer the dynamic evolution of transient network states based on fMRI and M/EEG data, and their association with states of consciousness.
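To give a flavour of the inter-subject synchronization idea, the sketch below computes a naive single-channel inter-subject correlation: each subject's time course is correlated with the average of all other subjects. Parra's actual approach (correlated components analysis) operates on multichannel EEG and extracts maximally correlated spatial components; this toy version only illustrates the underlying intuition:

```python
import numpy as np

def isc(signals):
    """Naive inter-subject correlation for single-channel time courses.
    signals: array-like of shape (subjects, time). Returns, per subject,
    the Pearson correlation with the mean of the remaining subjects."""
    X = np.asarray(signals, dtype=float)
    out = []
    for i in range(X.shape[0]):
        others = np.delete(X, i, axis=0).mean(axis=0)  # leave-one-out average
        out.append(np.corrcoef(X[i], others)[0, 1])
    return np.array(out)
```

High values indicate that a subject's brain response is driven by the shared natural stimulus rather than idiosyncratic activity, which is what makes the measure usable without well-defined external event markers.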
Note: We will not be limited to a theoretical exposition of the methods but will present, instead, live applications to sample data. In these parts of the tutorial the audience will be able to carefully follow the general procedure underlying the methods, understand possible pitfalls and develop an idea of how to apply the tools to their own data. To facilitate the replicability of the tutorial sessions, the sample data and algorithms can be provided to the audience upon request. In each Q&A session we will put the focus on three main points:
• The conditions under which the methods can be applied, including possible pitfalls and artifacts and how to avoid them.
• The interpretation of the results yielded by the methods in neurobiological terms and their usefulness for non-invasively investigating consciousness in humans (with emphasis on information that cannot be provided by classical approaches).
• Brainstorming on possible experimental setups and research questions that can benefit from the methods (with an emphasis on the audience's own research questions).
TUTORIAL 6
Devin B. Terhune (University of London, UK)
Vince Polito (Macquarie University, Australia)
Amanda J. Barnier (Macquarie University, Australia)
The study of hypnosis can provide valuable information regarding the nature of consciousness. Investigating responses to hypnotic suggestions in highly suggestible individuals can yield numerous insights into agency, cognitive control, and conscious awareness. Hypnosis can also be used in an instrumental manner to systematically induce, disrupt, or otherwise alter a host of processes related to consciousness. In turn, hypnosis can aid us in investigating different phenomena that are otherwise difficult to experimentally manipulate in a laboratory setting. The central aim of this tutorial is to give a broad introduction to experimental hypnosis research. First, we will provide a brief history of hypnosis and introduce the instruments and procedures used by hypnosis researchers. We will devote considerable time to the measurement of hypnotic suggestibility, discuss the developmental and genetic determinants of hypnotic suggestibility, and assess evidence for its cognitive and personality correlates. Next, we will describe and weigh the evidence for different theories of hypnosis and review research bearing on the cognitive and neural basis of hypnotic responding. Finally, we will conclude by outlining the use of hypnosis as an experimental technique for studying consciousness and describe how it can be utilized to investigate different research questions. In particular, we will discuss the use of hypnosis in the study of agency, attention, awareness, memory, and perception. This tutorial will provide attendees with a comprehensive understanding of current knowledge of hypnosis.
TUTORIAL 7: “Using Bayes factors to provide evidence for no effect”
Zoltan Dienes (University of Sussex, UK)
I ran Bayes tutorials at ASSC 17 in San Diego and ASSC 19 in Paris. I am proposing to run a very similar one again as Bayes becomes increasingly recognized as a necessary inferential technique (for example the Association for Psychological Science recently declared that results sections in any of their journals could be purely Bayesian with no significance testing). I will incorporate the latest thinking on the material (it is an area of rapid development).
The purpose of the tutorial is to present simple tools for asserting no effect (no conscious knowledge, no interaction, no correlation, etc.). In particular, people will be taught how to apply Bayes factors to draw meaningful inferences from non-significant data, using free, easy-to-use online software that allows one to determine whether there is strong evidence for the null and against one's theory, or whether the data are just insensitive, a distinction p-values cannot make. These tools have greater flexibility than power calculations and allow null results to be interpreted over a wider range of situations. Such tools should make the publication of null results easier.
While the tools will be of interest to all scientists, they are especially relevant to researchers interested in the conscious/unconscious distinction, because inferring that a mental state is unconscious often rests on affirming a null hypothesis. For example, for perception to be below an objective threshold, discrimination of stimulus properties must be at chance. Similarly, for perception to be below a subjective threshold by the zero-correlation criterion, the ability to discriminate one's own accuracy must be at chance (e.g. a meta-d′ of zero). To interpret a non-significant result, what is needed is a non-arbitrary specification of the distribution of discrimination abilities given conscious knowledge. Conventional statistics cannot solve this problem, but Bayes factors provide a simple solution. The solution is vital for progress in the field, as so many conclusions about unconscious mental states rely on null results with no indication of whether the non-significant result is due purely to data insensitivity.
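The logic can be sketched numerically. A Dienes-style calculation takes a sample mean and standard error and a model of the effect predicted by the theory; the hypothetical function below implements a half-normal-prior variant by numerical integration. It is an illustration of the principle, not the online calculator used in the tutorial:

```python
import numpy as np

def bayes_factor_halfnormal(mean, se, h1_sd):
    """Bayes factor for a sample mean versus a point null.
    H1: effect ~ half-normal(0, h1_sd), where h1_sd reflects the rough
    effect size the theory predicts; H0: effect = 0. Conventionally,
    BF > 3 is evidence for H1, BF < 1/3 evidence for H0, and values in
    between indicate the data are insensitive."""
    def normal_pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    null_like = normal_pdf(mean, 0.0, se)
    # Marginal likelihood under H1: average the likelihood over the prior.
    theta = np.linspace(0.0, 5.0 * h1_sd, 2001)
    dx = theta[1] - theta[0]
    prior = 2.0 * normal_pdf(theta, 0.0, h1_sd)   # half-normal density
    alt_like = np.sum(normal_pdf(mean, theta, se) * prior) * dx
    return alt_like / null_like
```

A clearly non-null result (e.g. mean 3, SE 1, predicted effect ~3) yields a large Bayes factor, while a tightly estimated mean near zero yields a Bayes factor well below 1/3, positive evidence for the null that a non-significant p-value alone could never license.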
Note: The online software can be used by entering a handful of numbers (e.g. means and standard errors) so the audience can readily work through examples on their laptops, both examples I discuss and also new ones anyone wishes to bring up. I will provide constant examples throughout. Further, past delivery of the seminar (almost monthly over the world for the past several years) has resulted in much discussion, so I can confirm my style does invite audience participation. The topic naturally lends itself, as everyone will have relevant experience (by virtue of being a consumer of significance tests) and any user of statistics will find the material directly relevant, and thus have questions about how it applies to the specifics of their research.
TUTORIAL 8: “Deep Learning: Implementing Consciousness”
Antoine Pasquali (XCompass Ltd., Japan)
Sixty-eight years have passed since Hebb described the first neurological rule that would allow connectionist networks to learn associations between pairs of inputs and outputs. Since then, bio-inspired models and algorithms have steadily grown in both efficacy and complexity, but major breakthroughs have remained scarce and progress slow. Recently, though, the field has seen renewed interest, with performance on par with or better than humans on cognitive abilities that were until now considered unique to the human mind. Indeed, deep neural networks are quickly closing the gap to human-level performance on a wide range of tasks: object classification, facial recognition, speech recognition, medical diagnosis, and even complex planning and decision making such as that required to play Go. Hence, perhaps a relevant question to ask ourselves right now is: how far are we from achieving “Machine Consciousness”? After an introduction to the field, I will clarify the underlying logic of connectionist networks so that they can be fully understood by everyone, independent of their background. I will then explain in detail how different models and methods work (a broad panel including, among others, the Perceptron, the Forward/Inverse model, the Metacognitive network, and the Convolutional Neural network), and draw an in-depth parallel with the cognitive mechanisms that they replicate. We will discuss the strengths and weaknesses of these models, as well as the tasks they were designed for and can be further applied to. We will then enumerate together specific cognitive functions in relation to Consciousness and discuss the best methods to implement them. Finally, I will review the main deep learning libraries currently available online so that everyone can go home with the right set of tools to apply deep learning in the context of their own research projects.
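As the Perceptron is the simplest model on the panel above, its error-driven learning rule fits in a few lines. This is a generic textbook implementation offered as a sketch, not the tutorial's own code:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt's perceptron rule: when the output is wrong, nudge the
    weights toward (or away from) the input. Converges for any linearly
    separable mapping of binary inputs to binary outputs."""
    w = np.zeros(X.shape[1] + 1)                   # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append constant bias input
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = int(xi @ w > 0)                 # threshold activation
            w += lr * (target - pred) * xi         # error-driven update
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)
```

Trained on the four input patterns of a logical AND, the rule finds a separating weight vector within a handful of epochs; adding hidden layers and gradient-based training to this scheme is what leads to the deep networks discussed in the tutorial.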
TUTORIAL 9: “The empirical study of altered states of consciousness: Common standards in the psychometric assessment of subjective experiences”
Timo Torsten Schmidt (Institute of Cognitive Science, Germany)
The experimental induction of altered states of consciousness (ASCs) constitutes a unique research opportunity to relate changes in phenomenological states to underlying neuronal mechanisms. A variety of pharmacological as well as non-pharmacological methods, such as breathing techniques or sensory deprivation, can induce ASCs in humans. Subjective reports suggest that ASCs, even when induced by different methods, share certain aspects of experience. To clarify whether shared subjective experiences also share neuronal mechanisms, an accurate psychometric assessment of subjects' experiences is necessary.
Multiple questionnaires have been developed based on qualitative reports and philosophical conceptualizations to quantify the phenomenology of ASCs. Here, I present an overview of available psychometric tools, their theoretical background, and their validation. I will contrast questionnaires that cover a broad range of different experiences with those designed to assess induction-method-specific effects, e.g. the effects typical of hallucinogens. Addressing a broad range of ASC experiences is required for the identification of common phenomenological structures of differently-induced ASCs. Based on their phenomenological scope and on how much they have been used in previous studies, I present recommendations for questionnaires to assess ASC phenomena in future neuroscientific experiments. Common standards for this rapidly extending body of research will foster comparability across different phenomenological states (‘phenomenological patterns’) and different studies. The comparison across studies represents an empirical framework to test how alterations in subjective experiences can be mapped onto brain functions and related to current theories on global brain function.
TUTORIAL 10: “Intra-neuronal origins of consciousness – The ‘Orch OR’ theory”
Stuart Hameroff (The University of Arizona, US)
The nature of consciousness, the mechanism by which it occurs in the brain and its place in the universe are unknown. A common assumption is that consciousness emerges from complex computation among brain neurons whose states are likened to fundamental units of information, or ‘bits’. However this assumption generates neither testable predictions nor a logical rationale for why computation per se should result in consciousness. Accordingly, prominent neuroscientists and philosophers resort to ‘panpsychism’, the notion that experiential ‘qualia’ are features of the universe, that consciousness is, in some sense, an intrinsic component of reality, akin to mass, spin or charge. But reduction to panpsychism confronts quantum physics.
In the mid 1990s, Roger Penrose and I began to suggest that consciousness depends on quantum processes in microtubules inside brain neurons. Microtubules are cylindrical lattice polymers of the protein tubulin, the brain’s most prevalent protein, which self-assemble to form and shape neurons and regulate synaptic plasticity. We hypothesized that microtubules, e.g. mixed polarity microtubule networks within dendrites and soma of cortical layer 5 pyramidal cells, could act as quantum computers whose basic ‘bit-like’ units were states of individual tubulins. Such tubulin states depended on orientation of oscillating intra-tubulin pi resonance dipoles which could exist in quantum superposition of alternative possible orientations simultaneously, ‘quantum bits’, or ‘qubits’. Further, we suggested the quantum vibrations in dendritic-somatic microtubules could interact across scale, be ‘orchestrated’ by synaptic inputs, memory, and resonance, and regulate synaptic plasticity and axonal firings.
The continuous Schrödinger evolution of each such quantum superposition in collections of microtubules was proposed to terminate in accordance with the specific Diosi-Penrose (‘DP’) scheme of objective reduction (‘OR’) of the quantum state. ‘Orchestrated’ OR (‘Orch OR’) events were taken to result in moments of full conscious awareness and/or choice (akin to Whitehead ‘occasions of experience’), and sequences of such moments to give rise to our familiar ‘stream of consciousness’. The DP form of OR is related to the fundamentals of quantum mechanics and space-time geometry, so Orch OR suggests a connection between brain microtubules and the basic structure of the universe.
Though skeptically criticized, Orch OR generates testable neurobiological predictions, many of which have been validated, and none refuted. It is consistent with, and may underlie, neural-level theories including Global Workspace, Predictive Coding, Integrated Information Theory and Higher Order Thought. Orch OR is a rigorous scientific theory suggesting orchestrated quantum vibrations in microtubules inside neurons are the ground floor of a recursive brain hierarchy, and that consciousness is more akin to music than to computation.