How can we know when we know we know? Towards measuring metacognition
A major goal in consciousness science is to discriminate between unconscious and conscious processes. Behaviourally, conscious cognition can be inferred by measuring metacognition (i.e., knowledge of the accuracy of one's perception, or knowing that one knows). Metacognition is, however, difficult to assess consistently. Under popular signal detection theory models for stimulus classification tasks, measures such as the confidence-accuracy correlation and type II d' are highly sensitive to response biases in both the type I (classification) and type II (metacognitive) tasks. Maniscalco and Lau (2011; Consciousness and Cognition) recently addressed this issue with a new measure, meta-d': the type I d' that would have produced the observed type II data had the subject used all of the type I information. Trivially, meta-d' = d' irrespective of response bias when the type I and type II decisions are based on the same Gaussian signal; however, its behaviour under more general and empirically plausible scenarios is unknown. Here, we describe a rigorous set of analytical and simulation results, leveraging new analytical formulae for meta-d'. We systematically analyse scenarios in which metacognitive judgments utilise enhanced or degraded versions of the type I signal, and in which decision criteria are jittered. Analytically, meta-d' values typically reflect the underlying model well and are stable under changes in decision criteria; in extreme cases, however, meta-d' becomes unstable. Simulations indicate that experimental data must meet certain criteria for meta-d' to be numerically accurate and stable. Our results support meta-d' as a useful, stable measure of metacognition, and provide a rigorous methodology for its application.
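To make the signal detection theory quantities concrete, the sketch below simulates an equal-variance Gaussian observer and recovers type I d' from hit and false-alarm rates via d' = z(H) - z(FA). This is a minimal illustration of the standard SDT setup the abstract assumes, not the authors' meta-d' fitting procedure; the trial counts, generating d', and the log-linear rate correction are illustrative choices.

```python
import random
from statistics import NormalDist

def dprime(hits, fas, n_signal, n_noise):
    """Type I d' = z(hit rate) - z(false-alarm rate), with a small
    correction so that rates of exactly 0 or 1 stay finite."""
    h = (hits + 0.5) / (n_signal + 1)
    f = (fas + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

random.seed(1)
true_d = 1.5          # generating sensitivity (assumed for illustration)
n = 20000             # trials per stimulus class
hits = fas = 0
for _ in range(n):
    # signal trial: evidence ~ N(d', 1); respond "signal" if it
    # exceeds the unbiased criterion at d'/2
    if random.gauss(true_d, 1.0) > true_d / 2:
        hits += 1
    # noise trial: evidence ~ N(0, 1), same criterion
    if random.gauss(0.0, 1.0) > true_d / 2:
        fas += 1

est = dprime(hits, fas, n, n)
print(round(est, 2))  # close to the generating d' of 1.5
```

Meta-d' extends this logic: one asks which type I d' would, under the same Gaussian model, reproduce the observed confidence-rating (type II) data, so that d' and meta-d' are expressed on a common scale.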