Week 10 Model Answer
ten Brinke et al. (2014) propose that the unconscious mind has the ability to detect lies. They reason that human deception and the detection of deception have been in an evolutionary 'arms race', so that successful detection remains barely above chance (they cite a review which puts conscious lie detection at an average of 54% accuracy, compared to a chance rate of 50%). Further, they review evidence - from studies of non-human primates, aphasics and indirect measures of deception - which suggests that the capacity to detect lies exists within our minds, and that conscious reasoning may actually interfere with the expression of this unconscious knowledge (for example, because people rely on inaccurate beliefs about how liars behave).
In two experiments deception detection was assessed using both explicit measures (a verbal judgement of "truth" vs "lie") and implicit measures (reaction times in a task which doesn't directly ask for an assessment of deception). Targets for these judgements were videos of people involved in a "high stakes mock crime" scenario, in which liars stood to win $100 if they convinced an interviewer that they hadn't taken the $100 that they were instructed to take from the interview room while unobserved (henceforth "pleaders").
Experiment 1 compared explicit judgements of 72 participants for 12 pleaders' videos with reaction-time differences on an Implicit Association Test (Greenwald, McGhee & Schwartz, 1998), a categorisation task which used pictures of the pleaders and the categories "truth" vs "lie".
Experiment 2 compared explicit judgements of 66 participants for the 12 pleaders with categorisation reaction times for truth/lie-related words which were preceded by subliminally presented pictures of the pleaders.
In Expt 1 explicit judgements were slightly below chance, though not significantly so (p = .53); accuracy was not significantly different from chance for detecting truths, and below chance for detecting lies. IAT scores were significantly different from 0, suggesting that the speed of categorisation of the pleader faces was affected by whether they had been lying or telling the truth.
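The paper's IAT analysis isn't reproduced here, but the scoring logic behind "IAT scores significantly different from 0" can be sketched. The function and reaction times below are hypothetical, a simplified version of Greenwald-style D scoring rather than the authors' exact pipeline: a positive score means congruent pairings (e.g. a liar's face with the "lie" category) were categorised faster than incongruent ones.

```python
import statistics

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified Greenwald-style D score: the mean latency difference
    between incongruent and congruent trials, scaled by the standard
    deviation of all trials. Scores reliably above 0 across participants
    are what a test of 'IAT scores differ from 0' picks up."""
    diff = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return diff / pooled_sd

# Hypothetical reaction times (ms) for one participant
d = iat_d_score([600, 620, 640], [700, 720, 740])
```

The full published scoring algorithm adds error penalties and trial trimming, omitted here for brevity.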
In Expt 2 explicit judgements were above chance (correct detection of truth-telling was above chance; correct detection of lying was below chance). Categorisation reaction times differed significantly for targets preceded by congruent primes compared to incongruent primes: participants were slower to categorise the words if they were subliminally preceded by a pleader face which did not match the word category (e.g. the word "genuine" preceded by the face of someone who was lying).
CONCLUSIONS, IMPLICATIONS AND CRITICISMS
The study has strengths - the stimuli were prepared by asking people to tell real lies for high stakes (about £60). Although the authors do not discuss this, the possibility that lie detection may take time (so that later measures will be stronger) is addressed by counterbalancing the order of explicit and implicit measures across Experiments 1 and 2.
An issue which it would be nice to see addressed is the difference in detection rates for truths vs lies. In any signal detection task, responding can change either because discrimination improves or because response bias shifts. If people successfully label truth-telling as truth-telling (but not lying as lying), could it be that they have a bias to assume most people are telling the truth, and that this bias is affecting responding in these experiments? It isn't clear from the analysis presented.
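The discrimination/bias distinction can be made concrete with standard signal detection indices. This is a sketch, not an analysis the authors report; the hit and false-alarm rates below are hypothetical, treating "lie" as the signal to be detected.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """d' indexes discrimination (sensitivity to the truth/lie difference);
    criterion c indexes response bias. A 'truth bias' would appear as a
    positive c (reluctance to say "lie") even when d' is near zero."""
    z = NormalDist().inv_cdf  # convert rates to z-scores
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical rates: lies called "lie" 30% of the time,
# truths wrongly called "lie" 20% of the time
d_prime, c = dprime_and_criterion(0.30, 0.20)
```

With these rates, accuracy for truths (80%) is high and accuracy for lies (30%) is low, yet d' is small (~0.32) while c is clearly positive (~0.68) - exactly the pattern a truth bias, rather than genuine discrimination, would produce.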
The result that people are at chance, or worse than chance, in explicit lie detection is in contrast to the literature (including that which is cited by ten Brinke and colleagues). They do not offer an explanation of why their procedure might generate this anomaly, but that does not necessarily invalidate their comparison of explicit vs implicit measures of lie detection.
More serious is the different nature of the explicit and implicit judgements. The explicit judgements are one-off and categorical; the implicit judgements are repeated and continuous (reaction-time measures). It is not fair to say that unconscious lie detection is more accurate than conscious lie detection when different measures are used. For example, it may be that if a continuous measure of conscious lie detection had been elicited, such as confidence ratings of judgements, there would have been a significant difference between ratings of truth and lie stimuli. Additionally, Franz and von Luxburg (2014) argue that evidence that some difference exists between truth and lie stimuli doesn't automatically mean that categorisation (whether explicit or implicit) could be supported at above-chance rates.
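Franz and von Luxburg's point can be illustrated analytically. The numbers below are assumptions chosen for illustration, not estimates from either paper: even a small standardised difference between truth and lie trials yields a comfortably significant group-level test given enough trials, while the best possible trial-by-trial classifier stays close to chance.

```python
from statistics import NormalDist

effect_size = 0.2   # assumed Cohen's d between truth and lie RT distributions
n = 1000            # assumed number of trials per condition

# The expected two-sample t statistic grows with sqrt(n), so a tiny
# group-level difference is easily "significant" with many trials ...
expected_t = effect_size * (n / 2) ** 0.5

# ... but the optimal single-trial classifier (a threshold midway between
# the two distribution means, assuming equal variances) barely beats 50%:
best_accuracy = NormalDist().cdf(effect_size / 2)
```

Here expected_t is roughly 4.5 (well past the ~1.96 significance cutoff), yet best_accuracy is only about 0.54 - a significant implicit difference need not translate into usable lie detection.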
A direct comparison wasn't made between conscious and unconscious lie detection, so it isn't possible to form solid conclusions about the nature of 'unconscious lie detection'. The fact that some aspects of behaviour are influenced by truth/lie status doesn't mean that our unconscious, in any integrated sense, 'knows' when we are being lied to. The issue of why conscious lie detection is so poor isn't addressed by this study. Participants have access to the full range of their conscious thoughts and inarticulate feelings, so it remains a conundrum why they are not able to use these feelings to guide explicit judgement.
ten Brinke, L., Stimson, D., & Carney, D. R. (2014). Some evidence for unconscious lie detection. Psychological Science, 25(5), 1098-1105.
Franz, V. H., & von Luxburg, U. (2014). Unconscious lie detection as an example of a widespread fallacy in the neurosciences. arXiv preprint arXiv:1407.4240.
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6), 1464-1480.