How can we know when we know we know? Towards measuring metacognition 

Document Type: 
ASSC Conference Item
Article Type: 
Theoretical
Disciplines: 
Psychology
Topics: 
Cognition
Keywords: 
Metacognition, signal detection theory
Deposited by: 
Adam Barrett
Contact email: 
adam.barrett@sussex.ac.uk
Date of Issue: 
2012
Authors: 
Barrett, Adam B and Seth, Anil K
Event Dates: 
2-6 Jul 2012
Event Location: 
Brighton, UK
Event Title: 
16th annual meeting of the Association for the Scientific Study of Consciousness
Event Type: 
ASSC Conference
Presentation Type: 
Talk
Refereed: 
No
Publish status: 
Unpublished
Abstract: 


A major goal in consciousness science is to discriminate between unconscious and conscious processes. Behaviourally, conscious cognition can be inferred by measuring metacognition (i.e. knowledge of the accuracy of one's perception, or knowledge of knowing). Metacognition is, however, difficult to assess consistently. Under popular signal detection theory models for stimulus classification tasks, measures such as confidence-accuracy correlation and type II d’ are highly sensitive to response biases in both the type I (classification) and type II (metacognitive) tasks. Maniscalco and Lau (2011; Cons. Cogn.) recently addressed this issue via a new measure: meta-d’. This is the type I d’ that would have led to the observed type II data had the subject used all the type I information. Trivially, meta-d’=d’ irrespective of response bias when type I and type II decisions are based on the same Gaussian signal. However, its behaviour under more general and empirically plausible scenarios is unknown. Here, we describe a rigorous set of analytical and simulation results, leveraging new analytical formulae for meta-d’. We systematically analyse scenarios in which metacognitive judgments utilize enhanced or degraded versions of the type I signal, and in which decision criteria are jittered. Analytically, meta-d’ values typically reflect the underlying model well, and are stable under changes in decision criteria; however, in extreme cases meta-d’ becomes unstable. Simulations of experiments indicate that data must meet certain criteria for meta-d’ to be numerically accurate and stable. Our results provide support for meta-d’ as a useful, stable measure of metacognition, and new rigorous methodology for its application.
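As a minimal illustration of the signal detection theory framework the abstract assumes, the sketch below simulates an equal-variance Gaussian yes/no task and recovers the standard type I sensitivity d’ = z(hit rate) − z(false-alarm rate). The function names, the log-linear count correction, and the parameter values are illustrative assumptions, not the authors' code or the meta-d’ fitting procedure itself.

```python
import random
from statistics import NormalDist

def z(p):
    # inverse of the standard normal CDF
    return NormalDist().inv_cdf(p)

def type1_dprime(hits, misses, fas, crs):
    # hit and false-alarm rates with a small correction to avoid z(0) or z(1)
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    return z(h) - z(f)

# Equal-variance Gaussian SDT: noise mean 0, signal mean d_true, unit variance.
# The subject responds "yes" when the internal signal exceeds a fixed criterion.
random.seed(1)
d_true, criterion, n_trials = 1.5, 0.75, 100_000
hits = misses = fas = crs = 0
for _ in range(n_trials):
    stim_present = random.random() < 0.5
    x = random.gauss(d_true if stim_present else 0.0, 1.0)
    say_yes = x > criterion
    if stim_present:
        hits += say_yes
        misses += not say_yes
    else:
        fas += say_yes
        crs += not say_yes

d_est = type1_dprime(hits, misses, fas, crs)
print(round(d_est, 2))  # estimate should be close to d_true
```

Meta-d’ extends this logic: one asks what value of d’, fed through the same SDT model, would reproduce the observed type II (confidence) data, which is what makes it comparable to type I d’ on a common scale.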

Attachment: adambarrettassc16.pdf (1.28 MB)