PhD Opportunity: “Informing citizens? Effects of imprints on digital political advertising”

A fully funded PhD position starting October 2021. Application deadline: 12th March 2021 (interviews: 26th March).

Supervised by Dr Tom Stafford (Department of Psychology) and Dr Kate Dommett (Department of Politics) at the University of Sheffield, UK, in collaboration with the Electoral Reform Society (where the student will intern during their studies). The studentship will run alongside our Leverhulme Trust project, “Understanding online political advertising: perceptions, uses and regulation”.

As well as doctoral training in experimental psychology, advanced training in quantitative methods and open research, and experience of interdisciplinary and policy-engaged research, this studentship also comes with the opportunity to complete an MSc in Research Methods if you are coming straight from an undergraduate degree.

The project outline is below. Informal enquiries are welcome by email.


Political campaigning is increasingly carried out online, affording campaigners new possibilities for targeting and customising campaign material for different audiences. These developments have changed the information landscape, with consequences for the democratic ideal of an informed citizenry. As a consequence, policy makers have argued that voters need to be given additional information through transparency disclosures, also known as “imprints”, with Minister Chloe Smith arguing that:

“Democracy must work for the people – and empowering our citizens to participate makes our country stronger. However, there is growing concern about the transparency of the sources of political campaigning online, which is starting to have a negative impact on trust and confidence in our elections and democracy” (Cabinet Office, 2020, p.7)

Whilst the Government has begun to pursue policies designed to boost transparency, the impact that mandatory information disclosures on online political campaign materials (“digital imprints”) have on public attitudes and behaviour is unknown, making it unclear whether attempts to inform citizens will boost public confidence and trust, or result in ‘backfire’ effects.

Developed in partnership with the Electoral Reform Society (ERS), who have an active research programme on voter information and digital campaigning, this studentship will use survey and experimental designs to explore the effect of different regulatory responses designed to promote transparency and an informed citizenry. The student will test the impact of different possible digital imprints on voter response. Of interest is how voters use imprints to inform their interpretation of specific pieces of political information (e.g. digital campaign adverts). The project is also concerned with the overall impact on trust and confidence in political actors and the democratic system. The student will identify best practice for future regulation and policy design.

Reference: Cabinet Office (2020). Transparency in digital campaigning: technical consultation on digital imprints. https://www.gov.uk/


This project is funded by the ESRC WRDTP. Fees are paid at the UK level and a stipend of £15,285/year provided (+ additional funds to support research training).

The award is available on either a 1+3 or +3 basis. A 1+3 studentship provides funding for four years: the MA in Social Research in the first year, followed by three years of research funding for the PhD. A +3 studentship provides funding for three years of PhD research; this route is only available to candidates who already have an MA in Social Research or a comparable Masters in research methods.

For additional details please see https://wrdtp.ac.uk/studentships/.

The student needs to commence their studies on 1st October 2021.

Eligibility: The candidate should have a strong academic background in psychology, with a 1st or strong 2:1 undergraduate degree predicted or obtained.

Please direct any questions about eligibility to pgr-scholarships@sheffield.ac.uk


To apply, please email t.stafford@sheffield.ac.uk by the deadline with

– a CV (1 page); please highlight any relevant project work.

– a cover letter explaining why you want to do a PhD and this PhD in particular (1 page); please state whether you are applying for the 1+3 or +3 route.

– a proposal for how the effect of imprints on public confidence and trust could be investigated using the tools of experimental psychology (no more than 2 pages). This should introduce your own ideas, including brief details of both rationale and research methodology.


For wider literature introducing the topic of political advertising and its democratic significance, see:

Barnard, L. and Kreiss, D. (2013) ‘A Research Agenda for Online Advertising: Surveying Campaign Practices, 2000-2012’, International Journal of Communication, 2046-2066.

 Dommett, K. (2019) ‘The Rise of Online Political Advertising’, Political Insight, 10(4). 

Dommett, K., & Power, S. (2019). The political economy of Facebook advertising: Election spending, regulation and targeting online. The Political Quarterly, 90(2), 257-265.

Kim, T., Barasz, K., and John, L. (2018) ‘Why Am I Seeing this Ad? The Effect of Ad Transparency on Ad Effectiveness’, Journal of Consumer Research, 45(5): 906-932. https://doi.org/10.1093/jcr/ucy039


Links: FindAPhD

Understanding online political advertising: perceptions, uses and regulation

The Leverhulme Trust has funded this project, led by Dr Kate Dommett (Department of Politics) and involving me and Dr Nikos Aletras (Department of Computer Science).

Here’s the project abstract:

Microtargeted advertising is revolutionising political campaigning. Despite widespread adoption, and strong claims of efficacy, there is no systematic account of the rationale behind targeted ad campaigns, nor their perception by citizens. This lack impedes the design and implementation of an appropriate response from government or industry. Using voter surveys, in depth interviews with campaigners and analysis of online ad archives augmented by machine learning, this grant will explore the logic and practice of political advertising. It will place the regulation of political advertising within a broader framework of human rationality and the legitimate role of persuasion in politics.

It is due to start in January 2021, and we’ll be hiring two post-docs (each a three-year post). One post-doc, with a background in political science, will conduct research interviews with advertisers, campaigners, policy makers and stakeholders (and also help me with experimental survey design aimed at gauging public perceptions of targeted ads – the “folk theories” of how they work and how they should be regulated); the other will have a background in natural language processing/machine learning and will work on automated text and network analyses of ad archives and social media data. If you have a PhD, or will have one by January, and could fill one of these positions, please get in touch now to discuss.

Research Ambitions

I study learning and decision making. Much of my research looks at risk and bias, and their management, in decision making. I am also interested in skill learning, using measures of behaviour informed by work done in computational theory, robotics and neuroscience. More recently a strand of my research looks at complex decisions, and the psychology of reason, argument and persuasion.

Three core ambitions of my research are:

  • Data Intensive Methods – robust, scalable, reproducible experiments and analysis which are transparent, sharable and work as well with 400,000 data points as they do with 40.
  • Interdisciplinarity – collaborating across all scholarly fields.
  • Public Engagement – listening to public interests, sharing research process and outcomes with non-specialists, giving back to the publics involved with research.

For details see these funded research projects, or these scholarly publications (also available from the tabs above). Or scroll down for latest news and thoughts. See this page for past and upcoming talks.

Reproducibility

Open science essentials in 2 minutes, part 3

Let’s define it this way: reproducibility is when your experiment or data analysis can be reliably repeated. It isn’t replicability, which we can define as reproducing an experiment and subsequent analysis and getting qualitatively similar results with the new data. (These aren’t universally accepted definitions, but they are common, and enough to get us started).

Reproducibility is a bedrock of science – we all know that our methods section should contain enough detail to allow an independent researcher to repeat our experiment. With the increasing use of computational methods in psychology, there’s increasing need – and increasing ability – for us to share more than just a description of our experiment or analysis.

Reproducible methods

Using sites like the Open Science Framework you can share stimuli and other materials. If you use open source experiment software like PsychoPy or Tatool you can easily share the full scripts which run your experiment, so that people on different platforms, and without your software licenses, can still run it.
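
As an illustration (a minimal sketch of my own, not taken from any particular study), a fully scripted PsychoPy experiment can be as short as this, and anyone with PsychoPy installed can run it unchanged:

    # minimal fully scripted experiment: show a prompt, wait for a keypress
    from psychopy import visual, core, event

    win = visual.Window(color="black")
    stim = visual.TextStim(win, text="Press any key", color="white")

    stim.draw()
    win.flip()                 # put the stimulus on screen
    keys = event.waitKeys()    # wait for (and record) a response
    print("response:", keys)

    win.close()
    core.quit()

Because the whole procedure lives in the script, sharing that one file is sharing the experiment.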

Reproducible analysis

Equally important is making your analysis reproducible. You’d think that with the same data, another person – or even you in the future – would get the same results. Not so! Most analyses include thousands of small choices. A mis-step in any of these small choices – lost participants, copy/paste errors, mis-labeled cases, unclear exclusion criteria – can derail an analysis, meaning you get different results each time (and different results from what you’ve published).

Fortunately a solution is at hand! You need to use analysis software that allows you to write a script to convert your raw data into your final output. That means no more Excel sheets (no history of what you’ve done = very bad – don’t be these guys) and no more point-and-click SPSS analysis.

Bottom line: You must script your analysis – trust me on this one
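
To make that concrete, here is a minimal sketch of what a scripted analysis looks like (the file and column names are invented for illustration): one runnable file that takes you from raw data to the reported numbers, with every exclusion written down rather than applied by hand:

    # analysis.py -- raw data in, final numbers out, every step on the record
    # (illustrative only: 'raw_data.csv' and its columns are made-up names)
    import pandas as pd

    df = pd.read_csv("raw_data.csv")   # one row per participant per condition

    # exclusion criteria, stated explicitly
    n_before = len(df)
    df = df[df["accuracy"] >= 0.6]              # exclude rows with near-chance accuracy
    df = df[df["mean_rt"].between(200, 2000)]   # exclude implausible mean reaction times (ms)
    print(f"excluded {n_before - len(df)} of {n_before} rows")

    # the final, reportable summary
    summary = df.groupby("condition")["mean_rt"].agg(["mean", "sem", "count"])
    summary.to_csv("results_summary.csv")
    print(summary)

Run it again next year and you get exactly the same numbers – or an error that tells you precisely what changed.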

Open data + code

You need to share and document your data and your analysis code. All this is harder work than just writing down the final result of an analysis once you’ve managed to obtain it, but it makes for more robust analysis, and allows someone else to reproduce your analysis easily in the future.

The most likely beneficiary is you – your most likely collaborator in the future is Past You, and Past You doesn’t answer email. Every analysis I’ve ever done I’ve had to repeat, sometimes years later. It saves time in the long run to invest in making a reproducible analysis first time around.
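
Documentation needn’t be elaborate either. Here is a small sketch of the kind of record Past You will be grateful for (the file names are hypothetical):

    # log_run.py -- note what was run, on which data, with which software versions
    # (illustrative sketch; 'raw_data.csv' and 'results_summary.csv' are placeholders)
    import hashlib
    import platform

    import pandas as pd

    def file_fingerprint(path):
        """Hash a file so you can tell later exactly which version of the data was analysed."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()[:12]

    with open("ANALYSIS_LOG.txt", "a") as log:
        log.write(f"python {platform.python_version()}, pandas {pd.__version__}\n")
        log.write(f"input:  raw_data.csv (sha256 {file_fingerprint('raw_data.csv')})\n")
        log.write("output: results_summary.csv, produced by analysis.py\n")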

Further Reading

Nick Barnes: Publish your computer code: it is good enough

British Ecological Society: Guide to Reproducible Code

Gael Varoquaux : Computational practices for reproducible science

Advanced

Reproducible Computational Workflows with Continuous Analysis

Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research

Part of a series for graduate students in psychology.
Part 1: pre-registration.
Part 2: the Open-Science Framework.

Cross-posted at mindhacks.com

2017 review

Things that have consumed my attention in 2017…

Teaching & Public Engagement

At the beginning of the year I taught my graduate seminar class on cognitive neuroscience, and we reviewed Cordelia Fine’s “Delusions of Gender” and the literature on sex differences in cognition. I blogged about some of the topics covered (linked here), and gave talks on the subject at Leeds Beckett and the University of Sheffield. It’s a great example of a situation that is common to so much of psychology: strong intuitions guide interpretation as much as reliable evidence.

In the autumn I helped teach undergraduate cognitive psychology, and took part in the review of our entire curriculum as the lead of the “cognition stream”. It’s interesting to ask exactly what a psychology student should be taught about cognitive psychology over three years.

In January I gave a lecture at the University of Greenwich on how cognitive science informs my teaching practice, which you can watch here: “Experiments in Learning”.

We organised a series of public lectures on psychology research, Mind Matters. These included Gustav Kuhn and Megan Freeth talking about the science of magic, and Sophie Scott (who gave this year’s Christmas Lectures at the Royal Institution) talking about the science of laughter. You can read about the full programme on the Mind Matters website. Joy Durrant did all the hard work for these talks – thanks Joy!

 

Research:

Using big data to test cognitive theories.

Our paper “Many analysts, one dataset: Making transparent how variations in analytical choices affect results” is now in press at (the new journal) Advances in Methods and Practices in Psychological Science. See previous coverage in Nature (‘Crowdsourced research: Many hands make tight work’) and 538 (Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for.). This paper is already more cited than many of mine which have been published for years.

On the way to looking at chess players’ learning curves I got distracted by sex differences: surely, I thought, chess would be a good domain in which to look for the controversial ‘stereotype threat’ effect? It turns out female chess players outperform expectations when playing men (in press at Psychological Science).

Wayne Gray edited a special issue of Topics in Cognitive Science: Game XP: Action Games as Experimental Paradigms for Cognitive Science, which features our paper Testing sleep consolidation in skill learning: a field study using an online game.

I presented this work at a related symposium at CogSci17 in London (along with our work on learning in the game Destiny), at a Psychonomics workshop in Madison, WI (Beyond the Lab: Using Big Data to Discover Principles of Cognition), and at a Pint of Science event in Sheffield (video here).

Our map of implicit racial bias in Europe sparked lots of discussion (and the article was read nearly 2 million times at The Conversation).

 

Trust and reason

I read Hugo Mercier and Dan Sperber’s ‘The Enigma of Reason: A New Theory of Human Understanding’ and it had a huge effect on me, influencing a lot of the new work I’ve been planning this year (my review in the Times Higher is here).

In April I went to a British Academy roundtable meeting on ‘Trust in Experts’. Presumably I was invited because of this research on why we don’t trust the experts. Again, this has influenced lots of future plans, but there is nothing to show yet.

Relatedly, we have AHRC funding for our project Cyberselves: How Immersive Technologies Will Impact Our Future Selves. Come to the workshop on the effects of teleoperation and telepresence, in Oxford in February.

 

Decision making

Our Leverhulme project on implicit bias and blame wound up, with outputs now in press or in preparation.

My former PhD students Maria Panagiotidi, Angelo Pirrone and Cigar Kalfaoglu have also published papers, with me as co-author, making me look more prolific than I am. See the publications page.

 

The highlight of the year has been getting to speak to and work with so many generous, interesting, committed people. Thanks and best wishes to all.

Previous years’ reviews: 2016 review, 2015 review.

Funded PhD studentship

Funding is available for a PhD studentship in my department, based around a teaching fellowship. This means you’d get four years of funding but would be expected to help teach during your PhD.

Relevant suitability criteria include:

  • Being ready to start on the 5th of February
  • Having completed an MSc with a Merit or Distinction
  • Being an EU citizen
  • Having a background in psychology

Projects I’d like to supervise are here, including:

Analysing Big Data to understand learning (like this)

Online discussion: augmenting argumentation with chatbots (with Andreas Vlachos in Computer Science)

Improving skill learning (theory informed experiments!)

A PhD with me will involve using robust and open science methods to address theoretical ideas in cognitive science. Plus extensive mentoring on all aspects of the scholarly life, conducted in Sheffield’s best coffee shops.

Full details of the opportunity here. Deadline: 18th December. Get in touch!

The Open Science Framework

Open science essentials in 2 minutes, part 2

The Open Science Framework (osf.io) is a website designed to support the complete life-cycle of your research project: designing projects; collaborating; collecting, storing and sharing data; sharing analysis scripts, stimuli and results; and publishing.

You can read more about the rationale for the site here.

Open Science is fast becoming the new standard for science. As I see it, there are two major drivers of this:

1. Distributing your results via a slim journal article dates from the 17th century. Constraints on the timing, speed and volume of scholarly communication no longer apply. In short, now there is no reason not to share your full materials, data, and analysis scripts.

2. The Replicability crisis means that how people interpret research is changing. Obviously sharing your work doesn’t automatically make it reliable, but since it is a costly signal, it is a good sign that you take the reliability of your work seriously.

You could share aspects of your work in many ways, but the OSF has particular benefits:

  • the OSF is backed by serious money & institutional support, so the online side of your project will be live many years after you publish the link
  • It integrates with various other platforms (GitHub, Dropbox, the PsyArXiv preprint server)
  • Totally free, run for scientists by scientists as a non-profit

All this, and the OSF also makes things like version control and pre-registration easy.

Good science is open science. And the fringe benefit is that making materials open forces you to properly document everything, which makes you a better collaborator with your number one research partner – your future self.

Notes to support a lightning talk given as part of the Open Science seminar in the Department of Psychology, University of Sheffield on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Pre-registration

Open Science essentials in 2 minutes, part 1

The Problem

As a scholarly community we allowed ourselves to forget the distinction between exploratory and confirmatory research, presenting exploratory results as confirmatory and post-hoc rationales as predictions. As well as being dishonest, this makes for unreliable science.

Flexibility in how you analyse your data (“researcher degrees of freedom”) can invalidate statistical inferences.

Importantly, you can employ questionable research practices like these (“p-hacking”) without knowing you are doing it. Decide to stop an analysis because the results are significant? Measure 3 dependent variables and use the one that “works”? Exclude participants who don’t respond to your manipulation? All of these are justified in exploratory research, but they mean you are exploring a garden of forking paths in the space of possible analyses – when you arrive at a significant result, you won’t be sure whether you got there because of the data or because of your choices.
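
If you want to see how quickly this bites, here is a small simulation (an illustrative sketch, not based on any real study): two groups drawn from the same distribution, so there is no true effect, but checking the p-value after every batch of participants and stopping as soon as it dips below .05 pushes the false positive rate well above the nominal 5%:

    # optional stopping with no true effect: how often do we "find" something?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_simulations, n_start, n_max, step = 2000, 20, 100, 10
    false_positives = 0

    for _ in range(n_simulations):
        a = rng.normal(size=n_max)   # both groups come from the SAME distribution
        b = rng.normal(size=n_max)
        for n in range(n_start, n_max + 1, step):
            p = stats.ttest_ind(a[:n], b[:n]).pvalue
            if p < 0.05:             # peek, and stop as soon as it looks "significant"
                false_positives += 1
                break

    print(f"false positive rate: {false_positives / n_simulations:.1%} (nominal: 5%)")

The fix, described below, is to decide these things in advance.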

The solution

There is a solution – pre-registration. Declare in advance the details of your method and your analysis: sample size, exclusion conditions, dependent variables, directional predictions.

You can do this

Pre-registration is easy. There is no single, universally accepted way to do it.

  • you could write your data collection and analysis plan down and post it on your blog.
  • you can use the Open Science Framework to timestamp and archive a pre-registration, so you can prove you made a prediction ahead of time.
  • you can visit AsPredicted.org, which provides a form to complete that helps you structure your pre-registration (making sure you include all the relevant information).
  • “Registered Reports”: more and more journals are committing to publishing pre-registered studies. They review the method and analysis plan before data collection and agree to publish once the results are in (however they turn out).

You should do this

Why do this?

  • credibility – other researchers (and journals) will know you predicted the results before you got them.
  • you can still do exploratory analysis; pre-registration just makes it clear which is which.
  • forces you to think about the analysis before collecting the data (a great benefit).
  • more confidence in your results.

Further reading

 

Addendum 14/11/17

As luck would have it, I stumbled across a bunch of useful extra resources in the days after publishing this post.

Notes to support a lightning talk given as part of the Open Science seminar in the Department of Psychology, University of Sheffield on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Cognitive Science Conference, Philadelphia

This week, 10-13th August, I am at the Annual Cognitive Science Society Conference in Philadelphia. While there I am presenting work which uses a large data set on chess players and their games.

The phenomenon of ‘stereotype threat’ has previously been found in many domains: people’s performance suffers when they are made more aware of their identity as a member of a social group which is expected to perform poorly. For example, there is a stereotype that men are better at maths, and stereotype threat has been reported for female students taking maths exams when their identity as women is emphasised, even if only subtly (by asking them to declare their gender at the top of the exam paper, for example). This effect has been reported for chess, which is heavily male-dominated, especially among top players. However, reports of stereotype threat in chess, as in many other domains, often rely on laboratory experiments with small numbers of people (around 100 or fewer).

My data are more than 11 million games of chess: every tournament game recorded with FIDE, the international chess authority, between 2008 and 2015. Using these data, I asked whether it was possible to observe stereotype threat in this real-world setting. If the phenomenon is real, however small it is, I should be able to observe it playing out in these data – the sheer number of games I can analyse gives me a very powerful statistical lens.
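
To give a flavour of how such an analysis can work (a simplified sketch of my own, not the exact pipeline from the paper, and with invented file and column names): every game comes with both players’ Elo ratings, which give an expected score for each encounter, so you can ask whether players do better or worse than that expectation depending on the gender of their opponent:

    # do players' results differ from their Elo-based expectation by opponent gender?
    # (illustrative sketch: 'games.csv' and its column names are invented)
    import pandas as pd

    def expected_score(rating, opponent_rating):
        """Standard Elo expectation for the first-named player."""
        return 1.0 / (1.0 + 10 ** ((opponent_rating - rating) / 400.0))

    games = pd.read_csv("games.csv")   # columns: rating, opp_rating, score (1/0.5/0), opp_gender
    games["expected"] = expected_score(games["rating"], games["opp_rating"])
    games["surplus"] = games["score"] - games["expected"]

    # positive surplus = outperforming the rating-based expectation
    print(games.groupby("opp_gender")["surplus"].agg(["mean", "sem", "count"]))

With millions of games, even a very small systematic difference would be detectable.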

The answer is no: there is no stereotype threat in international chess. To see how I determined this, and what I think it means, you can read the paper here, or see the Jupyter notebook which walks you through the key analysis. And if you’re at the conference, come and visit the poster (as PDF, as PNG). Jeff Sonas, who compiled the data, has been kind enough to allow me to make available a 10% sample (still over 1 million games), and this, along with all the analysis code for the paper, is available via the Open Science Framework.

There’s lots more to come from these data – as well as analysing performance-related effects, they afford a fantastic opportunity to look at learning curves and to try to figure out what affects how players’ performance changes over time.