Remarks at “Re-energising the narrative: human rights in the digital age”

Notes on my talk at Wilton Park’s “Re-energising the narrative: human rights in the digital age (WP1655)”. Wilton Park is an agency of the Foreign and Commonwealth Office which organises discussion events focussed on international security, prosperity and justice.

From the event outline:
“The event will consider the specific threats presented by abuse via social media platforms, the ‘echo chamber’ effect on critical thinking and policy making, and the deliberate exploitation of divisions in societies, eg computational propaganda/’fake news’, amplification by algorithms and systematised trolling.”

What does a psychologist have to offer here? I think the first thing is an apology, on behalf of my profession.

In our individualistic, narcissistic age, psychology is a growth area. Psychologists have been making hay from promoting the idea that people are irrational, that our thinking is riddled with systematic errors and delusions.

The titles of popular science books are an excellent lens on this. Go to the psychology section and you’ll find titles like “You Are Not So Smart” by David McRaney and “Predictably Irrational” by Dan Ariely. Both good books, but you see the theme.

Perhaps the most celebrated psychologist of recent years, Daniel Kahneman, whose work is foundational to behavioural economics and led directly to the idea of nudge, wrote “Thinking, Fast and Slow”, a book which describes our minds as divided, and often dominated by a fast, stupid system which, in the words of one commentator, portrays humans as basically “spending all their time failing”.

Who benefits from this? Well, obviously we, the psychologists, do. If human reasoning were straightforward then we’d be out of work. But the emphasis psychologists have put on studies of reasoning is profoundly limited.

So as well as an apology, I want to offer you some advice about the limitations of this work. To do this, let’s pick an example from the experimental study of communication, a study from Stanford by Paul Thibodeau and Lera Boroditsky.

These two ran a study on perception of crime and crime-control policy, where they asked participants to read a newspaper story about crime in a small US town.

Half the participants saw a version of the story where crime was described as a beast stalking the citizens, half saw a version where crime was described as a virus infecting the city.

Two versions, two metaphors for crime. And then all participants were asked which policies they would support to deal with crime, and their responses were recorded.

And this is how experimental psychology works – measurement (of people’s policy support) and comparison (of which metaphor people read about in the newspaper). Now the result isn’t so important, but I’m sure you’ll want to know, and probably won’t be surprised by, the finding that people exposed to the beast metaphor offered more support for policies aimed at capture, enforcement and punishment – more police on the streets, longer prison sentences – and those exposed to the virus metaphor offered more support for policies aimed at diagnosis, treatment and inoculation – more education, fixing the economy, resources to get kids out of gangs and so on.
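To make the measure-and-compare logic concrete, here is a minimal sketch in Python. The numbers are simulated purely for illustration – the group sizes, probabilities and policy labels are my invented placeholders, not Thibodeau and Boroditsky’s actual design or results.

```python
# A toy version of a two-condition experiment: randomly assign a
# manipulation (the metaphor), measure an outcome (policy preference),
# then compare the two groups. All numbers are invented for illustration.
from collections import Counter
import random

random.seed(42)

def simulate_participant(metaphor):
    # Assumption for the sketch: the metaphor shifts the probability of
    # preferring an enforcement-oriented policy over a reform-oriented one.
    p_enforce = 0.65 if metaphor == "beast" else 0.45
    return "enforce" if random.random() < p_enforce else "reform"

# Measure 200 simulated participants per condition.
groups = {m: Counter(simulate_participant(m) for _ in range(200))
          for m in ("beast", "virus")}

# Compare: what share of each group supports enforcement policies?
for metaphor, counts in groups.items():
    share = counts["enforce"] / sum(counts.values())
    print(f"{metaphor}: {share:.0%} support enforcement policies")
```

The experiment’s whole inferential power lives in that final comparison – which is exactly why, as discussed below, everything that does not differ between the two groups becomes invisible.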

But this study contains its own biases, and they are illustrative of the limitations of many such experiments.

First, because it works by comparing two conditions, it highlights the effect of the manipulation – the metaphor used, in this case – but at the cost of downplaying every other factor which influences people’s judgements.

The experiment allows you to see the difference the metaphor makes to people’s judgements, but renders the other reasons for those judgements invisible. And this is common. Psychologists love experiments in which superficial changes create differences between groups, but often we don’t put the size of those differences into context.

In this way, experiments on biases in perception and decision making – which psychologists love to run – tell a very dangerous half truth about human reasoning. They tell the story of how our judgements can be swayed by superficial or distracting factors – all true – but neglect the story of how we come to arrive at our beliefs in the first place, at the profound role reasoning and reflection play.

The second bias in many experiments on communication is that they almost invariably look at immediate effects. We ask people to take part in our experiment, and this typically involves the manipulation and the measure at the same time point. Almost nobody gets participants back to see how their beliefs have changed a week later, or a month, or a year. That’s too difficult.

So this creates another blindspot in our experimentally informed view of the world – we see things which have an immediate effect, which push our views and beliefs around: emotion, images, etc. But we’re blind to stuff which works its effects over the longer term. This matters, I argue, because among the things which have profound long-term effects are good reasons and moral values.

The current fashion for a psychology preoccupied with our biases and limitations underestimates the common inheritance we all have as reasoning and moral beings. Worse, by promoting a view of human nature as irrational, it panders to the view that the only way to persuade people is through cheap tricks which trigger biases. By acting as if this is true we risk making it so. If we believe that there’s no point arguing with some people – that they are irredeemably biased and irrational, beyond persuasion – we may abandon any attempt at persuasion by reasoned argument.

I’ve an optimistic faith in human rationality. We’re not perfect, but we can connect with people who disagree.

So the challenge is to communicate effectively, without giving in to the very partial view of human nature that psychology can seem to promote.

There is better and worse communication, yes; there is messaging which evokes our biases and so is more likely to be rejected, and messaging which works with the grain of the way we reason.

George Lakoff is a cognitive linguist who is well known for his work on metaphors and frames. Frames are the background ideas – metaphors – which determine the context for people’s reasoning. His claim is that frames can be used to control the contours of a political debate, most notably by determining what people try and refute, since each refutation reinforces the frame of the idea. So, for example, there is tax relief, the term for tax cuts promoted by US Republicans, which smuggles in the metaphor of tax as burden. So, Lakoff says, whether you are arguing for or against any particular case of tax relief, you have conceded the general idea that tax is a burden – and we all know that, ultimately, burdens should be lifted.

A criticism of Lakoff is that this opens the door to a sort of arms race where everyone tries to weaponise their language for maximum advantage. And maybe that would be true if we thought of framing as a cheap trick, a surface property which we could add to any messaging after we had already determined what we wanted to say. I’d argue a better understanding of Lakoff’s framing is that it gives us a way to connect our message to our common values, to share those values in a way that connects with understandings that our audience already has.

An important part of Lakoff’s book about political framing is the recognition that American conservatives have long understood the importance of ideas, and funded institutions – from think tanks to talk radio – which seed the frames in the minds of voters which political messages later target and exploit. Framing, in this view, is no surface property, but a way of quickly connecting to a deep history of ideas, values and community building.

To finish, I’d like to offer a positive example of framing from the city where I live and work. City of Sanctuary is a network of local organisations, started in Sheffield, with the aim of creating a culture of hospitality for those fleeing violence and persecution.

Notice the framing, and how it differs from the dominant metaphors surrounding asylum and immigration in the UK. That debate is so toxic that the phrase asylum seeker seems to come with a silent “bogus” at the front, and immigrant with a silent “illegal”. You could try and counter this with myth busting – showing the statistics that most immigration is legal, explaining the legitimate reasons for seeking asylum – but you’d be playing into the trap Lakoff outlines, reinforcing the frame of migration as illegitimate and suspect in general even as you try and rebut it in the particulars.

City of Sanctuary sidesteps that and harks back to a fundamental idea that we all recognise – the sacredness of sanctuary, of protection for those who need it. It asks us to think about the duties of hosts – of those fortunate enough to have shelter – to share it with those in need. It’s a brilliant bit of framing, and not a superficial trick. It allows, in a few sentences, the fundamental values of an organisation to be summed up and communicated.

So, in conclusion, remember that the evidence on the psychology of communication often disguises as much as it reveals, that it has a bias toward showing the immediate influence of surface changes, rather than the enduring power of reasons, arguments and values. There are ways to connect our deep principles with persuasive messages, and I’m looking forward to discussing the details of that with you over the next few days.

This is more or less what I said at Wilton Park, 14 January 2019. For more on the counter-literature in psychology which shows the power of reasoned argument, see my ‘For argument’s sake: evidence that reason can change minds‘. For a profound recent account of the psychology of human reasoning see “The Enigma of Reason: A New Theory of Human Understanding”, by Hugo Mercier and Dan Sperber (my review of this book here).

The Choice Engine

How and why do we choose? Are our choices free, or determined by our past, our brains or our environment? Are our choices ours? The Choice Engine is an interactive essay which unfolds according to what you choose to read about next.

Experience it by visiting @ChoiceEngine on Twitter.

We’ll be discussing the project and the ideas behind it as part of the Festival of the Mind at 4pm on the 25th of September in the Spiegeltent, Barker’s Pool. This event brings together the team behind the Choice Engine and scholars of choice from psychology, neuroscience and the arts to discuss choice and free will.


– Jon Cannon, Designer

– Tom Stafford, Department of Psychology

– Helena Ifill, School of English

And the chance for audience questions and interventions

This event is free, all welcome

Symposium on Robust Research Practices

Mate Gyurkovics has organised a Symposium on Robust Research Practices at the University of Sheffield on 7th of June 2018. There is a fantastic speaker line-up and you can register to attend (for free!) using this link:

Topics will include open science as a measure to improve quality control; the advantages of registered reports and pre-prints; and statistical issues (e.g. concerning the p-value) and potential alternatives.

Speakers: Dr Marcus Munafo (Bristol), Dr Chris Chambers (Cardiff), Dr Kate Button (Bath), Dr Hannah Hobson (Greenwich), Dr Verena Heise (Oxford), and Dr Lewis Halsey (Roehampton).

Date: Thursday, 7th June, 2018

Time: 10:30 to 17:00.

Venue: The Diamond, LT 8, University of Sheffield

Update: materials from the symposium now available here

Funded PhD studentship

Funding is available for a PhD studentship in my department, based around a teaching fellowship. This means you’d get four years of funding but would be expected to help teach during your PhD.

Relevant suitability criteria include:

  • Being ready to start on 5th of February
  • Having completed an MSc with a Merit or Distinction
  • Being an EU citizen
  • Having a background in psychology

Projects I’d like to supervise are here, including:

Analysing Big Data to understand learning (like this)

Online discussion: augmenting argumentation with chatbots (with Andreas Vlachos in Computer Science)

Improving skill learning (theory informed experiments!)

A PhD with me will involve using robust and open science methods to address theoretical ideas in cognitive science. Plus extensive mentoring on all aspects of the scholarly life, conducted in Sheffield’s best coffee shops.

Full details of the opportunity here. Deadline: 18th December. Get in touch!

Seminar: Framing Effects in the Field: Evidence from Two Million Bets

Seminar announcement

Framing Effects in the Field: Evidence from Two Million Bets

Friday 8th of December, 1pm, The Diamond LT2

Alasdair Brown, School of Economics, UEA

Abstract: Psychologists and economists have often found that risky choices can be affected by the way that the gamble is presented or framed.  We analyse two million tennis bets over a 6 year period to analyse 1) whether frames are important in a real high-stakes environment, and 2) whether individuals pay a premium in order to avoid certain frames.  In this betting market, the same asset can be traded at two different prices at precisely the same time.  The only difference is the way that the two bets are framed.  The fact that these isomorphic bets arise naturally allows us to examine a scale of activity beyond even the most well-funded experiments.  We find that bettors make frequent mistakes, choosing the worse of the two bets in 29% of cases.  Bettors display a (costly) aversion to the framing of bets as high risk, but there is little evidence of loss aversion.  This suggests that individuals are indeed susceptible to framing manipulations in real-world situations, but not in the way predicted by prospect theory.

Part of the Psychology department seminar series. Tom Stafford is the host.

Please contact me if you’d like to meet with Alasdair.

2016 review

Research. Theme #1: Decision making: Most of the work I’ve done this year hasn’t yet seen the light of day. Our Michael J Fox Foundation funded project using typing as a measure of the strength of habitual behaviour in Parkinson’s Disease continues, and we’ll finish the data analysis next month. Likewise, we should also soon finish the analysis on our project ‘Neuroimaging as a marker of Attention Deficit Hyperactivity Disorder (ADHD)’. Angelo successfully passed his viva (thesis title: “Decision modelling insights in cognition and adaptive decision making”) and takes up a fellowship at Peking University in 2017 (well done Angelo!).

This thread of work, which is concerned with the neural and mechanistic basis of decision making, informs the ‘higher-level’ work I do on decision making, which is preoccupied with bias in decision making and how to address it. This work, done with Jules Holroyd and Robin Scaife, has focussed on the idea of ‘implicit bias‘, and what might be done about it. As well as running experiments and doing conceptual analysis, we’ve been developing an intervention on cognitive and implicit bias, which summarises the current state of research and gives some practical advice on avoiding bias in decision making. I’ve done a number of these sessions with judges, which has been a humbling experience: to merely study decision making and then be confronted with a room of professionals who dedicate their time to actually making fair decisions. As with the other projects, much more on this work will hopefully see the light in 2017.

World events have made studying decision making to understand better decisions seem more and more relevant. Here’s a re-analysis of some older data which I completed following the UK’s referendum on leaving the EU in June: Why don’t we trust the experts? (and, relatedly, my thoughts on being a European scholar). Also on this topic, a piece for The Conversation: How to check if you’re in a news echo chamber – and what to do about it.

Journal publications on decision making:

Holroyd, J., Scaife, R., Stafford, T. (in press). Responsibility for Implicit Bias. Philosophy Compass.
Pirrone, A., Azab, H., Hayden, B.Y., Stafford, T. and Marshall, J.A.R. (in press). Evidence for the speed-value trade-off: human and monkey decision making is magnitude sensitive. Decision.
Panagiotidi, M., Overton, P.G., Stafford, T. (in press). Attention Deficit Hyperactivity Disorder-like traits and distractibility in the visual periphery. Perception.
Pirrone, A., Dickinson, A., Gomez, R., Stafford, T. and Milne, E. (in press). Understanding perceptual judgement in autism spectrum disorder using the drift diffusion model. Neuropsychology.
Bednark, J., Reynolds, J., Stafford, T., Redgrave, P. and Franz, E. (2016). Action experience and action discovery in medicated individuals with Parkinson’s disease. Frontiers in Human Neuroscience, 10, 427. DOI 10.3389/fnhum.2016.00427.
Lu, Y., Stafford, T., & Fox, C. (2016). Maximum saliency bias in binocular fusion. Connection Science, 28(3), 258-269.

(catch up on all publications on my scholarly publications page)


Research. Theme #2: Skill and learning

My argument is that games provide a unique data set where participants engage in profound skill acquisition AND the complete history of their skill development is easily recorded. To this end, I’ve several projects analysing data from games. This new paper: Stafford, T. & Haasnoot, E. (in press). Testing sleep consolidation in skill learning: a field study using an online game. Topics in Cognitive Science. (data + code) is an example of the new kinds of analysis – as well as the new results – which large data from games allow. The paper is an advance on our first work on this data (Stafford & Dewar, 2014), and is a featured project at the Centre for Data on the Mind. I gave a talk about this work at a workshop ‘Innovations in online learning environments: intrapersonal perspectives‘, for which there is video (view here: Factors influencing optimal skill learning: data from a simple online game).
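As a sketch of what a learning-curve analysis on game data can look like, here is a toy example in Python. The data are synthetic – the “power law of practice” form, parameter values and noise level are illustrative assumptions of mine, not results or methods from the paper.

```python
# Toy learning-curve analysis: each player's full practice history gives a
# per-attempt performance measure, which we fit with a power law of practice.
# All data here are synthetic, generated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
attempts = np.arange(1, 201)

# Simulated completion times that follow a power law of practice:
# time = a * attempts^(-b), with small multiplicative noise.
a_true, b_true = 40.0, 0.3
times = a_true * attempts ** (-b_true) * rng.lognormal(0.0, 0.05, attempts.size)

# A power law is a straight line in log-log space, so an ordinary
# linear fit on the logged data recovers the learning rate.
slope, intercept = np.polyfit(np.log(attempts), np.log(times), 1)

print(f"estimated learning rate: {-slope:.2f} (simulated value: {b_true})")
```

The appeal of large game datasets is exactly this: with every attempt recorded, curves like the one above can be estimated per player rather than averaged away across a group.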

I have been analysing a large dataset of chess games (11 million + games) and presented initial work on this at the Cognitive Science Conference. You can read the paper or see the code, results and commentary in an integrated Jupyter notebook (these are the future). There’s lots more exciting stuff to come out of this data!

Our overview of how the science of skill acquisition can inform development of sensory prostheses came out: Bertram, C., & Stafford, T. (2016). Improving training for sensory augmentation using the science of expertise. Neuroscience & Biobehavioral Reviews, 68, 234-244 (talk slides, lay summary).

Also: I wrote for The Conversation about an important review of the literature on the benefits of Brain Training, and I had a great summer student looking at the expertise acquired by Candy Crush players.


Teaching & thinking about teaching: Not as much to report as last year, since I had teaching leave for the autumn semester, as part of our Leverhulme project on bias and blame. At the beginning of the year I taught a graduate discussion class on dual-process theories in psychology and neuroscience, which was very worthwhile, but didn’t leave much digital trace. Whilst I’ve not been teaching classes, I have been thinking about teaching, publishing this in The Guardian: The way you’re revising may let you down in exams – and here’s why (my third piece in the G on learning), this on the NPJ ‘Science of Learning’ Community: Do students know what’s good for them? (I’m proud of this one, mainly for the quality of the outgoing links it includes), and this, for The Conversation, on an under-noted consequence of testing in education: Good tests make children fail – here’s why.

I also used some informal platforms (i.e. blogging etc) to produce some guidance for psychology students: this on what I call the Hierarchy of critique, and this on the logic of student experiment reports, and I tried to provoke some discussion around this: I don’t read students’ drafts. Should I?

I did some talks for graduate students (follow the links for slides): Adventures in research blogging and Expanding your writing portfolio.


Peer reviewing: I feel this should be recorded somewhere, since peer reviewing is a part of an academic’s job which requires the pinnacle of their expertise and experience, yet is generally unrecognised and unrewarded. This year I helped the scholarly community out by doing grant reviews for the Medical Research Council and the Biotechnology and Biological Sciences Research Council and manuscript reviews for Trends in Cognitive Sciences, Memory and Cognition, Connection Science, Canadian Journal of Philosophy, Journal of European Psychology Students, International Journal of Communication and the Annual Cognitive Science Society conference. From 1st of January I will only be reviewing papers which make their data freely available, as part of the Peer Reviewers’ Openness Initiative.


That’s mostly it, bar a few things I couldn’t fit under these four headlines. Thanks to everyone who helped with the work in 2016 – getting to talk, write and pursue ideas with sincere, intelligent, kind and interesting people is the best part of the job.

(Previously: 2015 review)

internship: Public Engagement Coordinator

If you are a recent graduate of the University of Sheffield, then you can apply for this paid internship as Public Engagement Coordinator, working with me in the Department of Psychology. Here’s a bit about what we want to do:

Help the Department of Psychology engage with the public. Our vision is to arrange, promote, run and record a series of
“TED”-style talks for Psychology at Sheffield. These will be our chance to reach hundreds of college-age students – both those in our majority recruitment demographic and those from under-represented backgrounds.

And here’s a bit about who we’re looking for:

The ideal candidate will be enthusiastic for what Universities can offer society, and vice versa. You will have an appreciation of the concerns of applicants to the University – especially those from “widening participation” backgrounds – and be capable of keeping track of a complex set of tasks. In this internship you will learn to organise and promote large events, to put scholarship in a wider context and see how issues in people’s everyday lives connect to the work we do in the Department of Psychology. You will practice writing in an engaging and accessible way and get to work with people across the University and the region.

It’s six months, full time, paid. Here are links for the overview and job description, but to apply you need to go to and enter reference UOS014640. To be eligible you need to have graduated from a University of Sheffield undergraduate degree in 2016. Closing date: 4th of November 2016

Any questions, feel free to get in touch with me

CogSci @ Sheffield

This mailing list: CogSci at Sheffield supports the ad hoc network of researchers at the University of Sheffield who are interested in Cognitive Science. You can sign yourself up and receive notifications about events happening across the University (but mostly emanating from Psychology, Philosophy, Linguistics and Computer Science).

We are European Scholars

I am British, but consider myself a European scholar. At the start of my time as a lecturer at the University I was lucky enough to be part of an EU funded project on learning in robots. With that project I worked with brilliant colleagues around the EU, as well as being able to do the piece of work I regard as my single most important scientific contribution. It was EU projects like this which inspired the foundation of Sheffield Robotics, a collaboration between the two Universities in Sheffield which aims to define the future of a technology vital to manufacturing in the UK. Two British PhD students I supervised during this project went on to start a business, and a third – from Cyprus – did work with me that led to a major grant from the Michael J Fox Foundation for research into Parkinson’s Disease – bringing US funding into the UK to allow me to work with colleagues in Spain and the US on a health issue that will affect 1 in 500 of us: over 120,000 people in the UK.

Since then I have had two more brilliant PhD students from the EU. One, from Greece, completed a PhD on differences in sensory processing in ADHD and has since gone to work in industry, applying her research skills for a company based in Manchester. The other, an Italian, is currently writing up, and considering job opportunities from around the world. My hope is that we’ll be able to keep him in the UK, where he’ll be able to continue to contribute to the research environment that makes British Universities the best in the world.

The UK needs Universities to train our young people, to contribute to public life and to investigate the world around us and within us. And the UK’s Universities need Europe.

I am a European scholar. We are European Scholars at the University of Sheffield. Without our European links and colleagues we, and the UK, would be immeasurably impoverished.

Written in support of yesterday’s call by the University of Sheffield’s Vice-Chancellor

Dangers and advantages in the idea of implicit bias

Last night I was on a panel discussion around the theme of “Success: is it all in the mind? A discussion on women in science.”

On that panel we discussed the idea of implicit bias, that we can behave in ways that are prejudiced even if we believe ourselves to be without prejudice (or even, anti-prejudice). Relevant examples might be: in meetings interrupting women more than men, filling departmental seminar or conference keynote slots with men rather than women, rating CVs which come from women as less employable and deserving less salary and so on.

The idea of implicit bias has both benefits and dangers for how we talk about bias. On the positive side, it gives us a mechanism for thinking about discrimination which isn’t about straightforward explicit prejudice. Sure, there are people who think “Women can’t do physics” or “Women shouldn’t work”, but implicit bias lets us talk about more subtle prejudice; it helps make visible the intangible feeling that, in a thousand different ways, life is different for members of different social groups. Relatedly, implicit bias lets us recognise that the line of division cuts through every one of us. It isn’t a matter of dividing the world into the sexists vs the feminists, say. Rather, because we’re all brought up in a world which discriminates against women, we acquire certain gender-prejudiced habits of thought. Even if that only means automatically thinking of a man when asked to imagine a scientist, that can have accumulating effects for women in science. Finally, thinking about implicit bias gives us a handle on what it might mean for an institution or a culture to be prejudiced. Again, without the need to identify individuals, implicit bias can help us talk about the ways in which we participate, or our organisation participates, in perpetuating discrimination. Nobody has to want people who are more likely to have childcare commitments to be excluded, but if your departmental meetings are always at 4pm then you risk excluding them.

But the idea of implicit bias can have a negative influence as well. We live in an age which is fascinated by the individual and the psychological. Just because implicit biases can be measured in individuals’ behaviour doesn’t mean that all problems of discrimination should be addressed at the psychological level of individuals. If one thing is clear about implicit bias, it is that the best approaches to addressing it won’t be psychological. This is a collective project: there is little or no evidence that ‘retraining’ individuals’ implicit biases works, and raising awareness, whilst important, doesn’t provide a simple cure. Approaching bias at an institutional or inter-personal level is more likely to be effective – things like tracking the outcomes of hiring decisions or anonymised marking have been shown to mitigate bias or insulate individuals from the possibility of bias.

Secondly, the way people talk about bias evokes a metaphor of our rational versus irrational selves which owes more to religion than it does to science. Implicit biases are often described as unconscious biases, when the meaning of unconscious is unclear, and there’s plenty of evidence that people are aware, in some ways, of their biases and/or able to intervene in their expression. By describing bias as ‘unconscious’ we risk thinking of these biases as essentially mysterious – unknowable and unalterable (and from there the natural thought is: well, there’s nothing I can do about them). My argument is that biases are not some unconscious, extraneous process polluting our thinking. Rather, they are constitutive of our thinking – you can’t think without assumptions and shortcuts. And assumptions and shortcuts, while essential, also create systematic distortions in the conclusions you come to.

The idea of implicit bias helps us see prejudice in unexpected places – including our own behaviour. It sets our expectation that there will be no magic bullet for addressing bias, and that progress will probably be slow, because cultural change is slow. These are the good things about thinking about the psychology of bias, but although the psychological mechanisms of bias are fascinating, we must recognise the limitations of thinking only about individuals and individual psychology when trying to deal with prejudice, especially when that prejudice is embedded in far wider historical, social and economic injustices. Nor should we allow the rhetoric of biases being ‘unconscious’ to trick us into thinking that bias is unknowable or unaccountable. There is no single thing to be done about discrimination, but things can be done.

Links & Endnotes:

The event was “Success: is it all in the mind? A discussion on women in science.”, organised by Cavendish Inspiring Women; the other panelists were Jessica Wade, Athene Donald and Michelle Ryan. My thanks to all the organisers and our chair, Stuart Higgins.

My thinking about bias is funded by the Leverhulme Trust, on a project led by Jules Holroyd. All my thinking about this has benefited from extensive discussion with her, and with the other members of that project (Robin Scaife, Andreas Bunge).

A previous post of mine about bias mitigation, which arose from doing training on bias with employment tribunal judges

A great book review: What Works: Gender Equality by Design, by Iris Bohnet, which says many sensible things but which risks describing bias as unconscious and therefore more mysterious and intractable than it really is

A good example of the risk of ‘psychologising’ bias: there are more police killings of blacks than whites in the US, but that may reflect other injustices in society rather than straightforward racist biases in police decisions to shoot (and even if it did, it isn’t clear that the solutions would be to target individual officers). See also ‘Implicit Bias Training for Police May Help, but It’s Not Enough‘.

A great discussion of the Williams and Ceci (2015) claim that “sexism in science is over”, and also here. See also ‘How have gender stereotypes changed in the last 30 years?’.