Research Ambitions

I study learning and decision making. Much of my research looks at risk and bias, and their management, in decision making. I am also interested in skill learning, using measures of behaviour informed by work done in computational theory, robotics and neuroscience. More recently a strand of my research looks at complex decisions, and the psychology of reason, argument and persuasion.

Three core ambitions of my research are:

  • Data Intensive Methods – robust, scalable, reproducible experiments and analysis which are transparent, sharable and work as well with 400,000 data points as they do with 40.
  • Interdisciplinarity – collaborating across all scholarly fields.
  • Public Engagement – listening to public interests, sharing research process and outcomes with non-specialists, giving back to the publics involved with research.

For details see these funded research projects, or these scholarly publications (also available from the tabs above). Or scroll down for latest news and thoughts. See this page for past and upcoming talks.

Job! Open Research Training Lead

Come and drive research improvement at the University of Sheffield! Work with me as Open Research Training Lead to design, deliver and evaluate training in open research, for researchers in all disciplines. This is a 5+ year post, full time, at a post-doctoral grade (G7), part of our national UKRN/Research England project.

This role will work across the University, and nationally with the UK Reproducibility Network (UKRN), to accelerate the uptake of open research practices. By empowering researchers to engage with better practice around research transparency, reuse and reliability, this role is key to the University Vision of an open research culture across all disciplines, which supports research quality and integrity, and deserves public trust in academic research.

It would suit someone who knows and cares about open research, and can work with researchers of all types to improve research practice. Applicants from a research (e.g. post-PhD) or research services background are encouraged.

Details here, deadline: December 15th 2021.

Any questions, please do get in touch by email or via twitter: @tomstafford

Update 2021-11-29.

I’ve had a few questions, which I’ll address here so everyone can enjoy the answers

Q: Will the role involve delivering workshops/training just within Sheffield, or across the 18 other institutions?

A: Primarily within Sheffield. The focus will be on training for University of Sheffield researchers, but the role involves coordinating with all UKRN partner institutions, so it is probable that there would be some exchange visits around the UK.

Q: How much technical knowledge do you need?

A: Comfort with, but not expertise in, technical platforms (e.g. R, GitHub) is necessary. Nobody can be an expert in everything, and for the most technical areas of open research we have the support of our RSE team, meaning that a trainer in this role could rely on them for the domain knowledge.

Q: Who decides on the training that will be designed and delivered?

A: You would! Designing a programme of comprehensive open research training, in coordination with the Research Practice Lead (Tom Stafford), existing University of Sheffield provision and the wider community, including the UKRN consortium, is part of the role.

Q: What does 5+ years mean?

A: It means we have guaranteed funding for 5 years, as part of the UKRN/REDF project, but the University would like to extend the role beyond the project.

Q: What will it be like working with you? Can I have some references?

A: Feel free to contact anyone I have previously supervised; there is a list here. For the last few years I’ve asked current supervisees to speak, in confidence, to anyone thinking of applying to work with me, so they shouldn’t be surprised to hear from you, and I won’t ask who they have spoken to or what they have spoken about.

(okay, I wasn’t really asked this question, but I thought I’d allow myself to ask, and answer it, for you).

Q: When is the start date?

A: As soon as possible

Q: When will interviews be held?

A: January 12th. We’re hoping to shortlist before Christmas, so will be in touch early in the new year at the latest.

Q: I understand that experience with designing workshops and training sessions is crucial. Would you encourage people with no such prior experience to apply too?

A: The answer for questions like this is to review the person specification in the About The Job document, paying careful attention to what is essential and what is desirable.

It will be very difficult for us to appoint anyone who doesn’t meet the essential criteria. With respect to experience of designing workshops, the most relevant criterion is “A good understanding of how to design and deliver successful and challenging training to research professionals, which promotes debate and discussion to promote and evolve good practice”. I could imagine a candidate demonstrating that they meet this criterion (“understanding”) without direct experience of having previously designed and run such workshops, but probably you can see that having actual experience is far more likely to be convincing.

The bottom line is that there is flexibility in how different candidates will meet the criteria. If you have any doubts, please get in touch and I’m happy to discuss. I will be sad if good candidates discount themselves because they are uncertain.

PhD Opportunity: “Informing citizens? Effects of imprints on digital political advertising”

A fully funded PhD position starting October 2021. Application deadline: 12th March 2021 (interviews: 26th of March)

Supervised by Dr Tom Stafford, Department of Psychology, and Dr Kate Dommett from the Department of Politics, at the University of Sheffield, UK and in collaboration with the Electoral Reform Society (where the student will intern during their studies). The studentship will run alongside our Leverhulme Trust project, “Understanding online political advertising: perceptions, uses and regulation”.

As well as doctoral training in experimental psychology, advanced training in quantitative methods and open research, and experience of interdisciplinary and policy-engaged research, this studentship also comes with an opportunity to complete an MSc in Research Methods if you are coming straight from an undergraduate degree.

The project outline is below. Informal enquiries are welcome by email.

Political campaigning is increasingly carried out online, affording campaigners new possibilities for targeting and customisation of campaign material to different audiences. These developments have changed the information landscape, having consequences for the democratic ideal of an informed citizenry. As a consequence, policy makers have argued that voters need to be given additional information through transparency disclosures, a.k.a “imprints”, with Minister Chloe Smith arguing that:

“Democracy must work for the people – and empowering our citizens to participate makes our country stronger. However, there is growing concern about the transparency of the sources of political campaigning online, which is starting to have a negative impact on trust and confidence in our elections and democracy” (Cabinet Office, 2020, p.7)

Whilst the Government has begun to pursue policies designed to boost transparency, the impact of using mandatory information disclosures on online political campaign materials (“digital imprints”) on public attitudes and behaviour is unknown, making it unclear whether attempts to inform citizens will boost public confidence and trust, or result in ‘backfire’ effects.

Developed in partnership with the Electoral Reform Society (ERS), who have an active research programme on voter information and digital campaigning, this studentship will use survey and experimental designs to explore the effect of different regulatory responses designed to promote transparency and an informed citizenry. The student will test the impact of different possible digital imprints on voter response. Of interest is how voters use imprints to inform their interpretation of specific pieces of political information (e.g. digital campaign adverts). The project is also concerned with the overall impact on trust and confidence in political actors and the democratic system. The student will identify best practice for future regulation and policy design.

Reference: Cabinet Office (2020). Transparency in digital campaigning: technical consultation on digital imprints.

This project is funded by the ESRC WRDTP. Fees are paid at the UK level and a stipend of £15,285/year provided (+ additional funds to support research training).

The award is available on either a 1+3 or +3 basis. A 1+3 studentship provides funding for four years, completing the MA in Social Research in the 1st year, followed by 3 years research funding for a PhD. A +3 studentship provides funding for three years of PhD; this is only available to candidates who already have an MA in Social Research or a comparable Masters in research methods.

For additional details please see

The student needs to commence their studies on 1st October 2021.

Eligibility: The candidate should have a strong academic background in psychology, with a 1st or strong 2:1 undergraduate degree predicted or obtained.

Please direct any questions about eligibility to

To apply, please email by the deadline with

– a CV (1 page); please highlight any relevant project work.

– a cover letter explaining why you want to do a PhD and this PhD in particular (1 page); please state whether you are applying for the 1+3 or +3 route.

– a proposal for how the effect of imprints on public confidence and trust could be investigated using the tools of experimental psychology (no more than 2 pages). This will introduce your own ideas, including brief details on both rationale and research methodology.

For wider literature introducing the topic of political advertising and its democratic significance see:

Barnard, L and Kreiss, D. (2013) ‘A Research Agenda for Online Advertising: Surveying Campaign Practices, 2000-2012’, International Journal of Communication, 2046-2066.

 Dommett, K. (2019) ‘The Rise of Online Political Advertising’, Political Insight, 10(4). 

Dommett, K., & Power, S. (2019). The political economy of Facebook advertising: Election spending, regulation and targeting online. The Political Quarterly, 90(2), 257-265.

Kim, T., Barasz, K., and John, L. (2018) ‘Why Am I Seeing this Ad? The Effect of Ad Transparency on Ad Effectiveness’, Journal of Consumer Research, 45(5): 906-932.

Links: FindAPhD

Engaging Dialogue Generated From Argument Maps


Jan 2021 update: Project is go! Pages here: Opening Up Minds: engaging dialogue generated from argument maps. In Sheffield, we are lucky to be joined by Dr Lotty Brand.

Starting January 2021, a two year post-doctoral research associate position. Skills required: experiment design, measure validation, online recruitment and testing, coding and/or statistical computing skills, and a background in psychology, linguistics or NLP generally, and in reason, argument or dialogue specifically. Applications open later this year. Informal enquiries welcome at any point.

The EPSRC has funded our project ‘Opening Up Minds: Engaging Dialogue Generated From Argument Maps’, led by Paul Piwek (Computer Science, Open University), with myself, Andreas Vlachos (Computer Science, University of Cambridge) and Svetlana Stoyanchev (Toshiba Research Europe).

The idea is to design a “dialogue system” interface to existing databases of the arguments surrounding controversial topics such as “Should the United Kingdom remain a member of the European Union?” or “Should all humans be vegan?”. In particular, a user can have a “Moral Maze” style chat with the dialogue system.

“Moral Maze” is a long-running popular BBC Radio 4 programme in which a panel discusses a controversial topic with the help of witnesses and a host who chairs the conversation. The dialogue system consists of a panel of Argumentation Bots (ArguBots) who present arguments for or against the topic under discussion (the pro and con ArguBots), a host ArguBot and a witness ArguBot (that can provide detailed evidence). The user is invited to join the panel and voice their views on the topic under discussion. Thus the user can explore what they thought and what others thought about the controversial topic.

An important part of the project will be to evaluate the effects on people’s appreciation of the complexity of debate and the attendant ability to comprehend the world from other people’s point of view or perspective.

The computer science research will focus on developing the dialogue agents (‘bots’) to allow users to explore controversial topics through natural language conversations. Our hope is such conversation can be engaging, and also free of the polarisation that we see in human-human interactions over social media around controversial topics.

My job will be to lead on WP3, which will look at evaluating how people experience the dialogues, and how their attitudes and beliefs are affected.

Here’s what we said about that in the proposal:

Work Package 3 – Evaluation (lead: Sheffield) Work on this WP will begin immediately with validation of the measures of open-mindedness, attitude strength and perception of argument coherence, and with the establishment of procedures for participant recruitment and testing. Importantly, we need to develop an appropriate control condition which will act as a baseline against which any benefits of engaging with the argument-map via ArguBots will be gauged. Development of the measures and control condition will also allow statistical power analysis to ensure that subsequent testing recruits enough participants to measure the effects of interest with sufficient accuracy. These can proceed before the full dialogue system development is finalised, using a Wizard-of-Oz (WoZ) protocol. As the dialogue system is developed, this WP will support continuous testing and feedback, allowing user behaviour to be integrated into development. The collected user utterances will be used as additional data to train and evaluate the components of the dialogue system. At fixed points, experiments will be conducted which test the impact of the dialogues on the participants at the three levels of 1) perception of coherence, 2) engagement and 3) impact on attitudes and beliefs. Because of the common cognitive bias to overestimate the extent of our insight into argument structure (see background), testing of the impact on attitudes and beliefs will use direct surveys, as well as before-after testing and novel implicit measures developed for the project and designed to test participants’ comprehension of opposing arguments (i.e. ability to pass the Ideological Turing Test). Ethics approval will be obtained for all data collection and evaluation with human subjects, and this will include necessary steps to mitigate ethical risks (e.g. including procedures for data deletion in the event that participants reveal personal information during the decision making task).

So we’ll be recruiting a post-doc for the project, to work with me in Sheffield and collaborate with the project partners at the OU and in Cambridge. The project starts mid January 2021 and applications will open later in the year. I’ll have a better idea of the job specification then, but I expect the ideal candidate will have a background in experimental research with online platforms, be interested in and/or informed about the psychology of reason, argument and dialogue, and be comfortable with interdisciplinary approaches (in particular working with NLP/Computer Science communities).

Informal enquiries are welcome at any time. Hit me up by email or on twitter.

Understanding online political advertising: perceptions, uses and regulation

The Leverhulme Trust has funded this project, led by Dr Kate Dommett (Department of Politics), and involving myself and Dr Nikos Aletras (Department of Computer Science).

Here’s the project abstract:

Microtargeted advertising is revolutionising political campaigning. Despite widespread adoption, and strong claims of efficacy, there is no systematic account of the rationale behind targeted ad campaigns, nor their perception by citizens. This lack impedes the design and implementation of an appropriate response from government or industry. Using voter surveys, in depth interviews with campaigners and analysis of online ad archives augmented by machine learning, this grant will explore the logic and practice of political advertising. It will place the regulation of political advertising within a broader framework of human rationality and the legitimate role of persuasion in politics.

It is due to start in January 2021, and we’ll be hiring two post-docs (each a three-year post). One post-doc with a background in political science will conduct research interviews with advertisers, campaigners, policy makers and stakeholders (but also help me with experimental survey design, aimed at gauging public perceptions of targeted ads – the “folk theories” of how they work and should be regulated); the other post-doc will have a background in natural language processing/machine learning and work on automated text and network analyses of ad archives and social media data. If you have a PhD, or will have a PhD by January, and could fill one of these positions, please get in touch now to discuss.

Remarks at “Re-energising the narrative: human rights in the digital age”

Notes on my talk at Wilton Park’s “Re-energising the narrative: human rights in the digital age (WP1655)“. Wilton Park is an agency of the Foreign and Commonwealth Office which organises discussion events focussed on international security, prosperity and justice.

From the event outline:
“The event will consider the specific threats presented by abuse via social media platforms, the ‘echo chamber’ effect on critical thinking and policy making, and the deliberate exploitation of divisions in societies, eg computational propaganda/’fake news’, amplification by algorithms and systematised trolling.”

What does a psychologist have to offer here? I think the first thing is an apology, on behalf of my profession.

In our individualistic, narcissistic age, psychology is a growth area. Psychologists have been making hay from promoting the idea that people are irrational, that our thinking is riddled with systematic errors and delusions.

The titles of popular science books are an excellent lens on this. Go to the psychology section and you’ll find books with titles like “You Are Not So Smart” by David McRaney and “Predictably Irrational” by Dan Ariely. Both good books, but you see the theme.

Perhaps the most celebrated psychologist of recent years, Daniel Kahneman, whose work is foundational to behavioural economics, which led directly to the idea of nudge, wrote “Thinking, Fast and Slow“, a book which describes our minds as divided, and often dominated by a fast, stupid system which, in the words of one commentator, portrays humans as basically “spending all their time failing”.

Who benefits from this? Well, obviously we, the psychologists, do. If human reasoning were straightforward then we’d be out of work. But the emphasis psychologists have put on studies of reasoning is profoundly limited.

So as well as an apology, I want to offer you some advice about the limitations of this work. To do this, let’s pick an example from the experimental study of communication, a study from Stanford by Paul Thibodeau and Lera Boroditsky.

These two ran a study on perception of crime and crime control policy, where they asked participants to read a newspaper story about crime in a US small town.

Half the participants saw a version of the story where crime was described as a beast stalking the citizens, half saw a version where crime was described as a virus infecting the city.

Two versions, two metaphors for crime. And then all participants were asked which policies they would support to deal with crime, and their responses were recorded.

And this is how experimental psychology works – measurement (of people’s policy support) and comparison (which metaphor people read about in the newspaper). Now the result isn’t so important but I’m sure you’ll want to know, and probably won’t be surprised by, the finding that people exposed to the beast metaphor offered more support for policies aimed at capture, enforcement and punishment: more police on the streets, longer prison sentences – and those exposed to the virus metaphor offered more support for policies aimed at diagnosis, treatment and inoculation – more education, fix the economy, resources to get kids out of gangs and so on.

But this study contains its own biases, and they are illustrative of the limitations of many such experiments.

First, because it works by comparing two conditions, it highlights the effect of the manipulation – the metaphor used, in this case – but at the cost of downplaying every other factor which influences people’s judgements.

The experiment allows you to see the difference the metaphor makes to people’s judgements, but renders the other reasons invisible. And this is common. Psychologists love experiments in which superficial changes create differences between groups, but often we don’t put the size of those differences into context.

In this way, experiments on biases in perception and decision making – which psychologists love to run – tell a very dangerous half truth about human reasoning. They tell the story of how our judgements can be swayed by superficial or distracting factors – all true – but neglect the story of how we come to arrive at our beliefs in the first place, at the profound role reasoning and reflection play.

The second bias in many experiments on communication is that they almost invariably look at immediate effects. We ask people to take part in our experiment, and this typically involves the manipulation and the measure at the same time point. Almost nobody gets participants back to see how their beliefs have changed a week later, or a month, or a year. That’s too difficult.

So this creates another blindspot in our experimentally informed view of the world – we see things which have an immediate effect, which push our views and beliefs around: emotion, images, etc. But we’re blind to stuff which works its effects longer term. This matters, I argue, because among the things which have profound long-term effects are good reasons and moral values.

The current fashion for a psychology preoccupied with our biases and limitations underestimates the common inheritance we all have as reasoning and moral beings. Worse, by promoting a view of human nature as irrational, it panders to the view that the only way to persuade people is through cheap tricks which trigger biases. By acting as if this is true we risk making it so. If we believe that there’s no point arguing with some people – that they are irredeemably biased and irrational, beyond persuasion – we may abandon any attempt at persuasion by reasoned argument.

I’ve an optimistic faith in human rationality. We’re not perfect, but we can connect with people who disagree.

So the challenge is to communicate effectively, without giving in to the very partial view of human nature that psychology can seem to promote.

There is better and worse communication, yes; there is messaging which evokes our biases and so is more likely to be rejected, and messaging which works with the grain of the way we reason.

George Lakoff is a psychologist who is well known for his work on metaphors and frames. Frames are the background ideas – metaphors – which determine the context for people’s reasoning. His claim is that frames can be used to control the contours of a political debate, most notably in determining what people try and refute, and that each refutation reinforces the frame of the idea. So, for example, there is tax relief, the term for tax cuts promoted by US Republicans, which smuggles in the metaphor of tax as burden. So, Lakoff says, whether you are arguing for or against any particular case of tax relief you have conceded the general idea that tax is a burden, and we all know that, ultimately, burdens should be lifted.

A criticism of Lakoff is that it opens the door to a sort of arms race where everyone tries to weaponise their language for maximum advantage. And maybe that would be true if we thought of framing as a cheap trick, a surface property which we could add to any messaging after we had already determined what we wanted to say. I’d argue a better understanding of Lakoff’s framing is that it gives a way to connect our message to our common values, to share those values in a way that connects with understandings that our audience already have.

An important part of Lakoff’s book about political framing is the recognition that American conservatives have long recognised the importance of ideas, and funded institutions – from think tanks to talk radio – which seed the frames in minds of voters which political messages later target and exploit. Framing, in this view, is no surface property, but a way of quickly connecting to a deep history of ideas, values and community building.

To finish, I’d like to end on a positive example of framing from the city where I live and work. City of Sanctuary is a network of local organisations, which started in Sheffield, with the aim of creating a culture of hospitality for those fleeing violence and persecution.

Notice the framing, and how it differs from the dominant metaphors surrounding asylum and immigration in the UK. That debate is so toxic that the phrase asylum seeker seems to come with a silent “bogus” at the front, and immigrant with a silent “illegal”. You could try and counter this with myth busting – show the statistics that most immigration is legal, explain the legitimate reasons for seeking asylum, but you’d be playing into the trap Lakoff outlines of reinforcing the frame of migration as illegitimate and suspect in general, even as you try and rebut it in the particulars.

City of Sanctuary sidesteps that and harks to a fundamental idea that we all recognise – of the sacredness of sanctuary, of protection for those who need it. It asks us to think about the duties of hosts, of those fortunate to have shelter, to share it with those in need. It’s a brilliant bit of framing, and not a superficial trick. It allows, in a few sentences, the fundamental values of an organisation to be summed up and communicated.

So, in conclusion, remember that the evidence on the psychology of communication often disguises as much as it reveals, that it has a bias toward showing the immediate influence of surface changes, rather than the enduring power of reasons, arguments and values. There are ways to connect our deep principles with persuasive messages, and I’m looking forward to discussing the details of that with you over the next few days.

This is more or less what I said at Wilton Park, 14 January 2019. For more on the counter-literature in psychology which shows the power of reasoned argument, see my ‘For argument’s sake: evidence that reason can change minds‘. For a profound recent account of the psychology of human reasoning see “The Enigma of Reason: A New Theory of Human Understanding”, by Hugo Mercier and Dan Sperber (my review of this book here).

The Choice Engine

How and why do we choose? Are our choices free, or determined by our past, our brains or our environment? Are our choices ours? The Choice Engine is an interactive essay which unfolds according to what you choose to read about next.

Experience it by visiting @ChoiceEngine on Twitter.

We’ll be discussing the project and the ideas behind it as part of the Festival of the Mind at 4pm on the 25th of September in the Spiegeltent, Barker’s Pool. This event brings together the team behind the Choice Engine and scholars of choice from psychology, neuroscience and the arts to discuss choice and free will.


– Jon Cannon, Designer

– Tom Stafford, Department of Psychology

– Helena Ifill, School of English

And the chance for audience questions and interventions

This event is free, all welcome.


Open science essentials in 2 minutes, part 4

Before a research article is published in a journal you can make it freely available for anyone to read. You could do this on your own website, but you can also do it on a preprint server, such as PsyArXiv, where other researchers also share their preprints. PsyArXiv allows others to find your research easily, and is supported by the OSF, which means it has the support to hold your papers for the long term.

Preprint servers have been used for decades in physics, but are now becoming more common across academia. Preprints allow rapid dissemination (and citation) of your research, which is especially important for early career researchers. Preprints can be cited and indexing services like Google Scholar will join your preprint citations with the record of your eventual journal publication.

Preprints also mean that work can be reviewed (and errors caught) before final publication.

What happens when my paper is published?

Your work is still available in preprint form, which means that there is a non-paywalled version and so more people will read and cite it. If you upload a version of the manuscript after it has been accepted for publication that is called a post-print.

What about copyright?

Mostly journals own the formatted, typeset version of your published manuscript. This is why you often aren’t allowed to upload the PDF of this to your own website or a preprint server, but there’s nothing stopping you uploading a version with the same text (so the formatting will be different, but the information is the same).

Will journals refuse my paper if it is already “published” via a preprint?

Most journals allow, or even encourage preprints. A diminishing minority don’t. If you’re interested you can search for specific journal policies here.

Will I get scooped?

Preprints allow you to timestamp your work before publication, so they can act to establish priority on a finding, which is protection against being scooped. Of course, if you have a project where you don’t want to let anyone know you are working in that area until you’re published, preprints may not be suitable.

When should I upload a preprint?

Upload a preprint at the point of submission to a journal, and for each further submission and upon acceptance (making it a postprint).

What’s to stop people uploading rubbish to a preprint server?

There’s nothing to stop this, but since your reputation for doing quality work is one of the most important things a scholar has, I don’t recommend it.

Useful advice:

Put clear headers in your pre-print, noting the version and/or date, and the status of the pre-print (e.g. under review or published). When you are published you can upload a version with a header saying “Please cite as [xxxxxxx]”.

Useful links:

Part of a series:

  1. Pre-registration
  2. The Open Science Framework
  3. Reproducibility

Cross-posted at

Quit while you’re ahead: a surprising interaction between game performance and motivation

Over the last nine months I’ve been lucky enough to work with Dagmar Adamcová, who has been at the University of Sheffield on an Erasmus scheme internship during her MSc studies at Masaryk University in the Czech Republic.

Dagmar’s project focused on investigating an oddity in player behaviour from a simple online game I have data for: in this game the players tended to quit on a high score. The game is Axon, which I’ve published papers on previously, showing patterns in how people get better at the game with practice. Normally I average over players who play different numbers of games, and in the average of performance against play attempt we see the typical learning curve (people get better quickly at first, then the rate of increase slows down).

The odd pattern of behaviour which Dagmar looked at can be seen clearly if we divide players into subgroups who play exactly the same number of times, and plot their average performance:

(graph from Dagmar’s report)

As you can see, this isn’t a typical smooth learning curve. Players’ average performance *leaps* on their last game. What’s going on? Well, Dagmar set out to investigate, and has published her analysis as a Jupyter notebook showing the analysis code, the results and the explanation of what she did.
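The subgroup averaging can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Dagmar’s actual notebook code; the player names, scores and record format are all made up.

```python
from collections import defaultdict

# Hypothetical play log: (player, attempt, score) triples, one per game.
plays = [
    ("a", 1, 10), ("a", 2, 12), ("a", 3, 30),
    ("b", 1, 11), ("b", 2, 25),
    ("c", 1, 9),  ("c", 2, 13), ("c", 3, 28),
]

# Each player's total number of games defines their subgroup.
n_plays = defaultdict(int)
for player, attempt, _ in plays:
    n_plays[player] = max(n_plays[player], attempt)

# Average score at each attempt, within each subgroup of players
# who played exactly the same number of games.
sums = defaultdict(lambda: [0, 0])  # (n_plays, attempt) -> [total, count]
for player, attempt, score in plays:
    cell = sums[(n_plays[player], attempt)]
    cell[0] += score
    cell[1] += 1

curves = {key: total / count for key, (total, count) in sums.items()}
print(curves[(3, 3)])  # mean final score for the three-game subgroup
```

Plotting `curves` separately for each subgroup, rather than pooling everyone, is what reveals the last-game leap that a single averaged learning curve hides.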

What she found was evidence that unusually high scores let you predict games on which players will quit. Further, she found that predicting when players will quit is enhanced if you include a psychological definition of ‘high score’. Specifically, the ratio of any particular player’s latest score to their previous best allows better predictions of when they will quit.
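That psychological definition of ‘high score’ can be sketched as follows (again, an illustrative toy example with made-up scores, not the analysis code from the notebook):

```python
# Hypothetical score history for a single player, in order of play.
scores = [10, 14, 12, 21]

# Ratio of each game's score to the player's previous personal best.
# A ratio above 1 marks a new personal best -- the psychologically
# defined "high score" that helps predict quitting.
ratios = []
best = None
for score in scores:
    ratios.append(score / best if best is not None else None)
    best = score if best is None else max(best, score)

print(ratios)
```

A feature like this final-game ratio could then be fed into a standard classifier (e.g. logistic regression) predicting whether a given game is the player’s last.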

The result is surprising because we normally assume that players of games like to win (and indeed, if success is rewarding we would normally predict that failure, not success, would lead to quitting). My theory is that players are “managing their hedonic experience”, or – as you might say in plain English – quitting while they are ahead.

We’d be interested to hear from anyone who has data which shows a similar interaction between performance and motivation. If you’ve seen a similar thing, please get in touch.

Read Dagmar’s full analysis in her notebook: Quit while you’re ahead: a surprising interaction between game performance and motivation.