Research Ambitions

I study learning and decision making. Much of my research looks at simple decision making and simple skill learning, using measures of behaviour informed by work in computational theory, robotics and neuroscience. More recently, a strand of my research looks at complex decisions, bias in decision making, and what might be called ‘evidence-informed persuasion’.

Three core ambitions of my research are:

  • Data Intensive Methods – robust, scalable, reproducible experiments and analysis which are transparent, sharable and work as well with 400,000 data points as they do with 40.
  • Interdisciplinarity – collaborating across all scholarly fields.
  • Public Engagement – listening to public interests, sharing research process and outcomes with non-specialists, giving back to the publics involved with research.

For details see these funded research projects, or these scholarly publications (also available from the tabs above). Or scroll down for latest news and thoughts.


Remarks at “Re-energising the narrative: human rights in the digital age”

Notes on my talk at Wilton Park’s “Re-energising the narrative: human rights in the digital age” (WP1655). Wilton Park is an agency of the Foreign and Commonwealth Office which organises discussion events focussed on international security, prosperity and justice.

From the event outline:
“The event will consider the specific threats presented by abuse via social media platforms, the ‘echo chamber’ effect on critical thinking and policy making, and the deliberate exploitation of divisions in societies, eg computational propaganda/’fake news’, amplification by algorithms and systematised trolling.”

What does a psychologist have to offer here? I think the first thing is an apology, on behalf of my profession.

In our individualistic, narcissistic age, psychology is a growth area. Psychologists have been making hay from promoting the idea that people are irrational, that our thinking is riddled with systematic errors and delusions.

The titles of popular science books are an excellent lens on this. Go to the psychology section and you’ll find books with titles like “You Are Not So Smart” by David McRaney, like “Predictably Irrational” by Dan Ariely. Both good books, but you see the theme.

Perhaps the most celebrated psychologist of recent years, Daniel Kahneman, whose work is foundational to behavioural economics, which led directly to the idea of nudge, wrote “Thinking, Fast and Slow”, a book which describes our minds as divided, and often dominated by a fast, stupid system which, in the words of one commentator, portrays humans as basically “spending all their time failing”.

Who benefits from this? Well, obviously we, the psychologists, do. If human reasoning were straightforward then we’d be out of work. But the emphasis psychologists have put on studies of reasoning is profoundly limited.

So as well as an apology, I want to offer you some advice about the limitations of this work. To do this, let’s pick an example from the experimental study of communication, a study from Stanford by Paul Thibodeau and Lera Boroditsky.

These two ran a study on perception of crime and crime control policy, where they asked participants to read a newspaper story about crime in a US small town.

Half the participants saw a version of the story where crime was described as a beast stalking the citizens, half saw a version where crime was described as a virus infecting the city.

Two versions, two metaphors for crime. And then all participants were asked which policies they would support to deal with crime, and their responses were recorded.

And this is how experimental psychology works – measurement (of people’s policy support) and comparison (which metaphor people read about in the newspaper). Now the result isn’t so important but I’m sure you’ll want to know, and probably won’t be surprised by, the finding that people exposed to the beast metaphor offered more support for policies aimed at capture, enforcement and punishment: more police on the streets, longer prison sentences – and those exposed to the virus metaphor offered more support for policies aimed at diagnosis, treatment and inoculation – more education, fix the economy, resources to get kids out of gangs and so on.

But this study contains its own biases, and they are illustrative of the limitations of many such experiments.

First, because it works by comparing two conditions it highlights the effect of the manipulation – the metaphor used, in this case – but at the cost of downplaying every other factor which influences people’s judgements.

The experiment allows you to see the difference the metaphor makes to people’s judgements, but renders the other reasons invisible. And this is common. Psychologists love experiments in which superficial changes create differences between groups, but often we don’t put the size of those differences into context.

In this way, experiments on biases in perception and decision making – which psychologists love to run – tell a very dangerous half truth about human reasoning. They tell the story of how our judgements can be swayed by superficial or distracting factors – all true – but neglect the story of how we come to arrive at our beliefs in the first place, at the profound role reasoning and reflection play.

The second bias in many experiments on communication is that they almost invariably look at immediate effects. We ask people to take part in our experiment, and this typically involves the manipulation and the measure at the same time point. Almost nobody gets participants back to see how their beliefs have changed a week later, or a month, or a year. That’s too difficult.

So this creates another blindspot in our experimentally informed view of the world – we see things which have an immediate effect, which push our views and beliefs around: emotion, images, etc. But we’re blind to stuff which works its effects over the longer term. This matters, I argue, because among the things which have profound long-term effects are good reasons and moral values.

The current fashion for a psychology preoccupied with our biases and limitations underestimates the common inheritance we all have as reasoning and moral beings. Worse, by promoting a view of human nature as irrational, it panders to the view that the only way to persuade people is through cheap tricks which trigger biases. By acting as if this is true we risk making it so. If we believe that there’s no point arguing with some people – that they are irredeemably biased and irrational, beyond persuasion – we may abandon any attempt at persuasion by reasoned argument.

I’ve an optimistic faith in human rationality. We’re not perfect, but we can connect with people who disagree.

So the challenge is to communicate effectively, without giving in to the very partial view of human nature that psychology can seem to promote.

There is better and worse communication, yes; there is messaging which evokes our biases and so is more likely to be rejected, and messaging which works with the grain of the way we reason.

George Lakoff is a cognitive linguist who is well known for his work on metaphors and frames. Frames are the background ideas – metaphors – which determine the context for people’s reasoning. His claim is that frames can be used to control the contours of a political debate, most notably in determining what people try and refute, and that each refutation reinforces the frame of the idea. So, for example, there is tax relief, the term for tax cuts promoted by US Republicans, which smuggles in the metaphor of tax as burden. So, Lakoff says, whether you are arguing for or against any particular case of tax relief you have conceded the general idea that tax is a burden, and we all know that, ultimately, burdens should be lifted.

A criticism of Lakoff is that his approach opens the door to a sort of arms race where everyone tries to weaponise their language for maximum advantage. And maybe that would be true if we thought of framing as a cheap trick, a surface property which we could add to any messaging after we had already determined what we wanted to say. I’d argue a better understanding of Lakoff’s framing is that it gives us a way to connect our message to our common values, to share those values in a way that connects with understandings our audience already has.

An important part of Lakoff’s book about political framing is the recognition that American conservatives have long recognised the importance of ideas, and funded institutions – from think tanks to talk radio – which seed the frames in the minds of voters which political messages later target and exploit. Framing, in this view, is no surface property, but a way of quickly connecting to a deep history of ideas, values and community building.

To finish, I’d like to end on a positive example of framing from the city where I live and work. City of Sanctuary is a network of local organisations, started in Sheffield, with the aim of creating a culture of hospitality for those fleeing violence and persecution.

Notice the framing, and how it differs from the dominant metaphors surrounding asylum and immigration in the UK. That debate is so toxic that the phrase asylum seeker seems to come with a silent “bogus” at the front, and immigrant with a silent “illegal”. You could try and counter this with myth-busting – show the statistics that most immigration is legal, explain the legitimate reasons for seeking asylum – but you’d be playing into the trap Lakoff outlines of reinforcing the frame of migration as illegitimate and suspect in general, even as you try and rebut it in the particulars.

City of Sanctuary sidesteps that and harks back to a fundamental idea that we all recognise – of the sacredness of sanctuary, of protection for those who need it. It asks us to think about the duties of hosts, of those fortunate enough to have shelter, to share it with those in need. It’s a brilliant bit of framing, and not a superficial trick. It allows, in a few sentences, the fundamental values of an organisation to be summed up and communicated.

So, in conclusion, remember that the evidence on the psychology of communication often disguises as much as it reveals, that it has a bias toward showing the immediate influence of surface changes, rather than the enduring power of reasons, arguments and values. There are ways to connect our deep principles with persuasive messages, and I’m looking forward to discussing the details of that with you over the next few days.

This is more or less what I said at Wilton Park, 14 January 2018. For more on the counter-literature in psychology which shows the power of reasoned argument, see my ‘For argument’s sake: evidence that reason can change minds’. For a profound recent account of the psychology of human reasoning see “The Enigma of Reason: A New Theory of Human Understanding”, by Hugo Mercier and Dan Sperber (my review of this book here).


The Choice Engine

How and why do we choose? Are our choices free, or determined by our past, our brains or our environment? Are our choices ours? The Choice Engine is an interactive essay which unfolds according to what you choose to read about next.

Experience it by visiting @ChoiceEngine on Twitter.

We’ll be discussing the project and the ideas behind it as part of the Festival of the Mind at 4pm on the 25th of September in the Spiegeltent, Barker’s Pool. This event brings together the team behind the Choice Engine and scholars of choice from psychology, neuroscience and the arts to discuss choice and free will.


– Jon Cannon, Designer

– Tom Stafford, Department of Psychology

– Helena Ifill, School of English

And the chance for audience questions and interventions

This event is free, all welcome



Open science essentials in 2 minutes, part 4

Before a research article is published in a journal you can make it freely available for anyone to read. You could do this on your own website, but you can also do it on a preprint server, such as PsyArXiv, where other researchers also share their preprints. PsyArXiv allows others to find your research easily, and is supported by the OSF, which means it has the support to hold your papers for the long term.

Preprint servers have been used for decades in physics, but are now becoming more common across academia. Preprints allow rapid dissemination (and citation) of your research, which is especially important for early career researchers. Preprints can be cited and indexing services like Google Scholar will join your preprint citations with the record of your eventual journal publication.

Preprints also mean that work can be reviewed (and errors caught) before final publication.

What happens when my paper is published?

Your work is still available in preprint form, which means that there is a non-paywalled version and so more people will read and cite it. If you upload a version of the manuscript after it has been accepted for publication that is called a post-print.

What about copyright?

Mostly, journals own the formatted, typeset version of your published manuscript. This is why you often aren’t allowed to upload that PDF to your own website or a preprint server, but there’s nothing stopping you uploading a version with the same text (so the formatting will be different, but the information is the same).

Will journals refuse my paper if it is already “published” via a preprint?

Most journals allow, or even encourage preprints. A diminishing minority don’t. If you’re interested you can search for specific journal policies here.

Will I get scooped?

Preprints allow you to timestamp your work before publication, so they can act to establish priority on a finding, which is protection against being scooped. Of course, if you have a project where you don’t want to let anyone know you are working in that area until you’re published, preprints may not be suitable.

When should I upload a preprint?

Upload a preprint at the point of submission to a journal, and for each further submission and upon acceptance (making it a postprint).

What’s to stop people uploading rubbish to a preprint server?

There’s nothing to stop this, but since your reputation for doing quality work is one of the most important things a scholar has I don’t recommend it.

Useful advice:

Put clear headers in your pre-print, noting the version and/or date, and the status of the pre-print (e.g. under review or published). When you are published you can upload a version with a header saying “Please cite as [xxxxxxx]”.

Useful links:

Part of a series:

  1. Pre-registration
  2. The Open Science Framework
  3. Reproducibility

Cross-posted at


Quit while you’re ahead: a surprising interaction between game performance and motivation

Over the last nine months I’ve been lucky enough to work with Dagmar Adamcová, who has been at the University of Sheffield on an Erasmus scheme internship during her MSc studies at Masaryk University in the Czech Republic.

Dagmar’s project focused on investigating an oddity in player behaviour from a simple online game I have data for: in this game the players tended to quit on a high score. The game is Axon, which I’ve published papers on previously, showing patterns in how people get better at the game with practice. Normally I average over players who play different numbers of games, and in the average of performance against play attempt we see the typical learning curve (people get better quickly at first, then the rate of increase slows down).
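That averaging step can be sketched in a few lines of Python. The data layout and numbers here are invented for illustration; they are not taken from the Axon dataset:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records of (player_id, attempt_number, score);
# the real Axon data are not reproduced here.
records = [
    ("a", 1, 90), ("a", 2, 130), ("a", 3, 150),
    ("b", 1, 70), ("b", 2, 120), ("b", 3, 160),
    ("c", 1, 80), ("c", 2, 110),
]

# Pool all players' scores by attempt number, regardless of how
# many games each player went on to play.
scores_by_attempt = defaultdict(list)
for player, attempt, score in records:
    scores_by_attempt[attempt].append(score)

# Average performance against play attempt: the learning curve.
learning_curve = {a: mean(s) for a, s in sorted(scores_by_attempt.items())}
print(learning_curve)
```

Dividing players into subgroups who played the same total number of games is the same computation, run after filtering `records` by each player’s total play count.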

The odd pattern of behaviour which Dagmar looked at can be seen clearly if we divide players into subgroups who play exactly the same number of times, and plot their average performance:

(graph from Dagmar’s report)

As you can see, this isn’t a typical smooth learning curve. Players’ average performance *leaps* on their last game. What’s going on? Well Dagmar set out to investigate, and has published her analysis as a Jupyter notebook showing the analysis code, the results and the explanation of what she did.

What she found was evidence that unusually high scores let you predict the games on which players will quit. Further, she found that predicting when players will quit is enhanced if you include a psychological definition of ‘high score’. Specifically, the ratio of any particular player’s latest score to their previous best allows better predictions of when they will quit.
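As a hypothetical sketch of that feature (the function name and data layout are mine, not from Dagmar’s notebook), the ratio of each score to the player’s previous best might be computed like this:

```python
def ratio_to_previous_best(scores):
    """For each game after the first, the score divided by the
    player's best score on any earlier game."""
    ratios = []
    best = scores[0]
    for score in scores[1:]:
        ratios.append(score / best)
        best = max(best, score)
    return ratios

# A player whose final game far exceeds their previous best quits
# on a high ratio: the signature that predicted quitting.
print(ratio_to_previous_best([100, 110, 105, 180]))
```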

The result is surprising because we normally assume that players of games like to win (and indeed, if success is rewarding we would normally predict that failure, not success, would lead to quitting). My theory is that players are “managing their hedonic experience”, or – as you might say in plain English – quitting while they are ahead.

We’d be interested to hear from anyone who has data which shows a similar interaction between performance and motivation. If you’ve seen a similar thing, please get in touch.

Read Dagmar’s full analysis in her notebook: Quit while you’re ahead: a surprising interaction between game performance and motivation.


Teaching: “How reliable is cognitive neuroscience?”

This spring I taught my MSc module ‘PSY6316 Current Issues in Cognitive Neuroscience’ on the topic “How reliable is cognitive neuroscience?”. Here’s the module outline:

What has been called The Replication Crisis has sparked widespread introspection about the standards and protocols of science, particularly within the behavioural sciences. This course, through reading a series of landmark papers and class discussion, will consider the extent to which doubts about the reliability of empirical work affect cognitive neuroscience. Can we trust the published papers in this field? Are the effects which we investigate reliable? If not, how can work in cognitive neuroscience be made more trustworthy?

The basic idea was to read material on robust science and scandals of unreliability in psychology, and ask the students to consider the extent to which these applied to cognitive neuroscience.

I asked students before they took the course, and after, a set of questions by anonymous questionnaire. The responses indicate that the course did at least induce some skepticism in the students:

Here are their before-vs-after responses to the question ‘If you read about a finding that has been demonstrated across multiple papers in multiple journals by multiple authors, how likely do you think that finding is to be reliable?’

Here are the responses for ‘If PSYCHOLOGY continues as it has, significant progress will be made in understanding in the next 50 years’ and ‘If COGNITIVE NEUROSCIENCE continues as it has, significant progress will be made in understanding in the next 50 years’.

Note that optimism is reduced for both fields, but started higher for cognitive neuroscience (perhaps unsurprising since many of the students are on the cogneuro MSc).

The full list of questions I asked, the responses and the plots are available here. Most importantly maybe, the reading list is also available, which contains landmark papers on replicability/reproducibility in psychology, as well as relevant readings concerning reliability in neuroimaging.

I have always run this course as a discussion class rather than lecture class, and I have always based it around controversies in cognitive neuroscience. Last year it was ‘sex differences in the brain‘. You can read a bit more about the thinking behind the course in

Stafford, T. (2008). A fire to be lighted: a case-study in enquiry based learning. Practice and Evidence of the Scholarship of Teaching and Learning in Higher Education, 3(1), 20-42.

Related: materials from the “Open Science and Robust Research Practices” symposium held in Sheffield on 7/6/18.



Symposium on Robust Research Practices

Mate Gyurkovics has organised a Symposium on Robust Research Practices at the University of Sheffield on 7th of June 2018. There is a fantastic speaker line up and you can register to attend (for free!) using this link:

Topics will include open science as a measure to improve quality control; the advantages of registered reports and pre-prints; and statistical issues (e.g. concerning the p-value) and potential alternatives.

Speakers: Dr Marcus Munafo (Bristol), Dr Chris Chambers (Cardiff), Dr Kate Button (Bath), Dr Hannah Hobson (Greenwich), Dr Verena Heise (Oxford), and Dr Lewis Halsey (Roehampton).

Date: Thursday, 7th June, 2018

Time: 10:30 to 17:00.

Venue: The Diamond, LT 8, University of Sheffield

Update: materials from the symposium now available here



Open science essentials in 2 minutes, part 3

Let’s define it this way: reproducibility is when your experiment or data analysis can be reliably repeated. It isn’t replicability, which we can define as reproducing an experiment and subsequent analysis and getting qualitatively similar results with the new data. (These aren’t universally accepted definitions, but they are common, and enough to get us started).

Reproducibility is a bedrock of science – we all know that our methods section should contain enough detail to allow an independent researcher to repeat our experiment. With the increasing use of computational methods in psychology, there’s increasing need – and increasing ability – for us to share more than just a description of our experiment or analysis.

Reproducible methods

Using sites like the Open Science Framework you can share stimuli and other materials. If you use open source experiment software like PsychoPy or Tatool you can easily share the full scripts which run your experiment, so that people on different platforms, and without your software licenses, can still run it.

Reproducible analysis

Equally important is making your analysis reproducible. You’d think that with the same data, another person – or even you in the future – would get the same results. Not so! Most analyses include thousands of small choices. A mis-step in any of these small choices – lost participants, copy/paste errors, mis-labeled cases, unclear exclusion criteria – can derail an analysis, meaning you get different results each time (and different results from what you’ve published).

Fortunately a solution is at hand! You need to use analysis software that allows you to write a script to convert your raw data into your final output. That means no more Excel sheets (no history of what you’ve done = very bad – don’t be these guys) and no more point-and-click SPSS analysis.

Bottom line: You must script your analysis – trust me on this one
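As a toy illustration of what a scripted analysis looks like (the data and the cutoff are invented for the example, not taken from any real study):

```python
from statistics import mean

# Raw data in, final number out, with every analytic choice
# (here, the exclusion criterion) recorded in the script itself.
raw = [
    {"participant": 1, "rt_ms": 430},
    {"participant": 2, "rt_ms": 9999},  # implausibly slow response
    {"participant": 3, "rt_ms": 512},
]

MAX_RT_MS = 2000  # exclusion criterion stated explicitly, not applied by hand

included = [row for row in raw if row["rt_ms"] <= MAX_RT_MS]
result = mean(row["rt_ms"] for row in included)
print(f"N = {len(included)}, mean RT = {result} ms")
```

Re-running the script reproduces the same result every time, and the exclusion is documented in the code rather than hidden in an edit history that a spreadsheet never kept.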

Open data + code

You need to share and document your data and your analysis code. All this is harder work than just writing down the final result of an analysis once you’ve managed to obtain it, but it makes for more robust analysis, and allows someone else to reproduce your analysis easily in the future.

The most likely beneficiary is you – your most likely collaborator in the future is Past You, and Past You doesn’t answer email. Every analysis I’ve ever done I’ve had to repeat, sometimes years later. It saves time in the long run to invest in making a reproducible analysis first time around.

Further Reading

Nick Barnes: Publish your computer code: it is good enough

British Ecological Society: Guide to Reproducible Code

Gael Varoquaux: Computational practices for reproducible science


Reproducible Computational Workflows with Continuous Analysis

Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research

Part of a series for graduate students in psychology.
Part 1: pre-registration.
Part 2: the Open-Science Framework.

Cross-posted at


2017 review

Things that have consumed my attention in 2017…

Teaching & Public Engagement

At the beginning of the year I taught my graduate seminar class on cognitive neuroscience, and we reviewed Cordelia Fine’s “Delusions of Gender” and the literature on sex differences in cognition. I blogged about some of the topics covered (linked here), and gave talks about the topic at Leeds Beckett and the University of Sheffield. It’s a great example of a situation that is common to so much of psychology: strong intuitions guide interpretation as much as reliable evidence.

In the autumn I helped teach undergraduate cognitive psychology, and took part in the review of our entire curriculum as the lead of the “cognition stream”. It’s interesting to ask exactly what a psychology student should be taught about cognitive psychology over three years.

In January I gave a lecture at the University of Greenwich on how cognitive science informs my teaching practice, which you can watch here: “Experiments in Learning”.

We organised a series of public lectures on psychology research, Mind Matters. These included Gustav Kuhn and Megan Freeth talking about the science of magic, and Sophie Scott (who gave this year’s Christmas Lectures at the Royal Institution) talking about the science of laughter. You can read about the full programme on the Mind Matters website. Joy Durrant did all the hard work for these talks – thanks Joy!



Using big data to test cognitive theories.

Our paper “Many analysts, one dataset: Making transparent how variations in analytical choices affect results” is now in press at (the new journal) Advances in Methods and Practices in Psychological Science. See previous coverage in Nature (‘Crowdsourced research: Many hands make tight work’) and 538 (Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for.). This paper is already more cited than many of mine which have been published for years.

On the way to looking at chess players’ learning curves I got distracted by sex differences: surely, I thought, chess would be a good domain to discover the controversial ‘stereotype threat’ effect? It turns out female chess players outperform expectations when playing men (in press at Psychological Science).

Wayne Gray edited a special issue of Topics in Cognitive Science: Game XP: Action Games as Experimental Paradigms for Cognitive Science, which features our paper Testing sleep consolidation in skill learning: a field study using an online game.

I presented this work at a related symposium at CogSci17 in London (along with our work on learning in the game Destiny), at a Psychonomics Workshop in Madison, WI (Beyond the Lab: Using Big Data to Discover Principles of Cognition) and at a Pint of Science in Sheffield (video here).

Our map of implicit racial bias in Europe sparked lots of discussion (and the article was read nearly 2 million times at The Conversation).


Trust and reason

I read Hugo Mercier and Dan Sperber’s ‘The Enigma of Reason: A New Theory of Human Understanding’ and it had a huge effect on me, influencing a lot of the new work I’ve been planning this year. (my review in the Times Higher here).

In April I went to a British Academy roundtable meeting on ‘Trust in Experts’. Presumably I was invited because of this research, why we don’t trust the experts. Again, this has influenced lots of future plans, but nothing to show yet.

Related, we have AHRC funding for our project Cyberselves: How Immersive Technologies Will Impact Our Future Selves. Come to the workshop on the effects of teleoperation and telepresence, in Oxford in February.


Decision making

Our Leverhulme project on implicit bias and blame wound up. Outputs in press or preparation:

My old PhD students Maria Panagiotidi, Angelo Pirrone and Cigir Kalfaoglu have also published papers, with me as co-author, making me look more prolific than I am. See the publications page.


The highlight of the year has been getting to speak to and work with so many generous, interesting, committed people. Thanks and best wishes to all.

Previous years’ reviews: 2016 review, 2015 review.

Cyberselves: How Immersive Technologies Will Impact Our Future Selves

We’re happy to announce the re-launch of our project ‘Cyberselves: How Immersive Technologies Will Impact Our Future Selves’. Straight out of Sheffield Robotics, the project aims to explore the effects of technology like robot avatars, virtual reality, AI servants and other tech which alters your perception or ability to act. We’re interested in work, play and how our sense of ourselves and our bodies is going to change as this technology becomes more and more widespread.

We’re funded by the AHRC to run workshops and bring our roadshow of hands on cyber-experiences to places across the UK in the coming year. From the website:

Cyberselves will examine the transforming impact of immersive technologies on our societies and cultures. Our project will bring an immersive, entertaining experience to people in unconventional locations, a Cyberselves Roadshow, that will give participants the chance to transport themselves into the body of a humanoid robot, and to experience the world from that mechanical body. Visitors to the Roadshow will also get a chance to have hands-on experiences with other social robots, coding and virtual/augmented reality demonstrations, while chatting to Sheffield Robotics’ knowledgeable researchers.

The project is a follow-up to our earlier AHRC project, ‘Cyberselves in Immersive Technologies‘, which brought together robotics engineers, philosophers, psychologists, scholars of literature, and neuroscientists.

We’re running a workshop on the effects of teleoperation and telepresence, in Oxford in February (Link).

Call for papers: symposium on AI, robots and public engagement at 2018 AISB Convention (April 2018).

Project updates on twitter, via Dreaming Robots (‘Looking at robots in the news, films, literature and the popular imagination’).

Cross-posted at
