Teaching: “How reliable is cognitive neuroscience?”

This spring I taught my MSc module ‘PSY6316 Current Issues in Cognitive Neuroscience’ on the topic “How reliable is cognitive neuroscience?”. Here’s the module outline:

What has been called The Replication Crisis has sparked widespread introspection about the standards and protocols of science, particularly within the behavioural sciences. This course, through reading a series of landmark papers and class discussion, will consider the extent to which doubts about the reliability of empirical work affect cognitive neuroscience. Can we trust the published papers in this field? Are the effects which we investigate reliable? If not, how can work in cognitive neuroscience be made more trustworthy?

The basic idea was to read material on robust science and scandals of unreliability in psychology, and ask the students to consider the extent to which these applied to cognitive neuroscience.

I asked students, before they took the course and after, a set of questions by anonymous questionnaire. The responses indicate that the course did at least induce some skepticism in the students:

Here are their before-vs-after responses to the question ‘If you read about a finding that has been demonstrated across multiple papers in multiple journals by multiple authors, how likely do you think that finding is to be reliable?’

Here are the responses for ‘If PSYCHOLOGY continues as it has, significant progress will be made in understanding in the next 50 years’ and ‘If COGNITIVE NEUROSCIENCE continues as it has, significant progress will be made in understanding in the next 50 years’.

Note that optimism is reduced for both fields, but started higher for cognitive neuroscience (perhaps unsurprising since many of the students are on the cogneuro MSc).

The full list of questions I asked, the responses and the plots are available here. Perhaps most importantly, the reading list is also available, which contains landmark papers on replicability/reproducibility in psychology, as well as relevant readings concerning reliability in neuroimaging.

I have always run this course as a discussion class rather than a lecture class, and I have always based it around controversies in cognitive neuroscience. Last year it was ‘sex differences in the brain’. You can read a bit more about the thinking behind the course in

Stafford, T. (2008). A fire to be lighted: a case-study in enquiry based learning. Practice and Evidence of the Scholarship of Teaching and Learning in Higher Education, 3(1), 20-42.

Related: materials from the “Open Science and Robust Research Practices” symposium held in Sheffield on 7/6/18.

 


Symposium on Robust Research Practices

Mate Gyurkovics has organised a Symposium on Robust Research Practices at the University of Sheffield on 7th of June 2018. There is a fantastic speaker line-up and you can register to attend (for free!) using this link: https://goo.gl/forms/Ao3pIjYvLldOD54O2

Topics will include open science as a means of quality control; the advantages of registered reports and preprints; and statistical issues (e.g. concerning the p-value) and potential alternatives.

Speakers: Dr Marcus Munafo (Bristol), Dr Chris Chambers (Cardiff), Dr Kate Button (Bath), Dr Hannah Hobson (Greenwich), Dr Verena Heise (Oxford), and Dr Lewis Halsey (Roehampton).

Date: Thursday, 7th June, 2018

Time: 10:30 to 17:00.

Venue: The Diamond, LT 8, University of Sheffield

Update: materials from the symposium now available here


Reproducibility

Open science essentials in 2 minutes, part 3

Let’s define it this way: reproducibility is when your experiment or data analysis can be reliably repeated. It isn’t replicability, which we can define as running the experiment and analysis again and getting qualitatively similar results with the new data. (These aren’t universally accepted definitions, but they are common, and enough to get us started.)

Reproducibility is a bedrock of science – we all know that our methods section should contain enough detail to allow an independent researcher to repeat our experiment. With the increasing use of computational methods in psychology, there’s increasing need – and increasing ability – for us to share more than just a description of our experiment or analysis.

Reproducible methods

Using sites like the Open Science Framework you can share stimuli and other materials. If you use open source experiment software like PsychoPy or Tatool you can easily share the full scripts which run your experiment, so that people on different platforms, and without your software licenses, can still run it.
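
For example, a complete experiment can live in a single script that anyone with PsychoPy installed can run. Here is a minimal sketch, assuming a single hypothetical text stimulus and response keys – the point is that the script itself documents the procedure:

```python
# Minimal PsychoPy sketch (the stimulus, keys and timing are hypothetical).
# Sharing a file like this lets anyone with PsychoPy re-run the procedure exactly.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color="grey", units="pix")
word = visual.TextStim(win, text="RED", color="blue", height=60)

word.draw()
win.flip()                        # stimulus onset
clock = core.Clock()              # time responses from onset
keys = event.waitKeys(keyList=["b", "r", "escape"], timeStamped=clock)
print(keys)                       # [key, reaction time] pairs

win.close()
core.quit()
```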

Reproducible analysis

Equally important is making your analysis reproducible. You’d think that with the same data, another person – or even you in the future – would get the same results. Not so! Most analyses include thousands of small choices. A mis-step in any of these small choices – lost participants, copy/paste errors, mis-labeled cases, unclear exclusion criteria – can derail an analysis, meaning you get different results each time (and different results from what you’ve published).

Fortunately a solution is at hand! You need to use analysis software that allows you to write a script to convert your raw data into your final output. That means no more Excel sheets (no history of what you’ve done = very bad – don’t be these guys) and no more point-and-click SPSS analysis.
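
To make this concrete, here is a minimal sketch of a scripted analysis in Python with pandas (the file names, column names and exclusion thresholds are all hypothetical): the whole path from raw data to reported numbers is written down and can be re-run at any time.

```python
# analysis.py - a minimal sketch of a scripted raw-data-to-output pipeline.
# File names, column names and exclusion thresholds are hypothetical.
import pandas as pd

raw = pd.read_csv("data/raw_trials.csv")       # the raw file is never edited by hand

# Exclusions are explicit in code, not done by deleting rows in a spreadsheet
clean = raw[(raw["rt"] > 0.2) & (raw["rt"] < 3.0)]
clean = clean[clean["accuracy"] == 1]

# Per-participant condition means, then the group-level summary
per_participant = (clean.groupby(["participant", "condition"])["rt"]
                        .mean()
                        .reset_index())
summary = per_participant.groupby("condition")["rt"].agg(["mean", "std", "count"])

summary.to_csv("output/summary.csv")           # the reported numbers come from here
print(summary)
```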

Bottom line: You must script your analysis – trust me on this one

Open data + code

You need to share and document your data and your analysis code. All this is harder work than just writing down the final result of an analysis once you’ve managed to obtain it, but it makes for more robust analysis, and allows someone else to reproduce your analysis easily in the future.

The most likely beneficiary is you – your most likely collaborator in the future is Past You, and Past You doesn’t answer email. Every analysis I’ve ever done I’ve had to repeat, sometimes years later. It saves time in the long run to invest in making a reproducible analysis the first time around.

Further Reading

Nick Barnes: Publish your computer code: it is good enough

British Ecological Society: Guide to Reproducible Code

Gael Varoquaux: Computational practices for reproducible science

Advanced

Reproducible Computational Workflows with Continuous Analysis

Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research

Part of a series for graduate students in psychology.
Part 1: pre-registration.
Part 2: the Open Science Framework.

Cross-posted at mindhacks.com


2017 review

Things that have consumed my attention in 2017…

Teaching & Public Engagement

At the beginning of the year I taught my graduate seminar class on cognitive neuroscience, and we reviewed Cordelia Fine’s “Delusions of Gender” and the literature on sex differences in cognition. I blogged about some of the topics covered (linked here), and gave talks about the topic at Leeds Beckett and the University of Sheffield. It’s a great example of a situation that is common to so much of psychology: strong intuitions guide interpretation as much as reliable evidence.

In the autumn I helped teach undergraduate cognitive psychology, and took part in the review of our entire curriculum as the lead of the “cognition stream”. It’s interesting to ask exactly what a psychology student should be taught about cognitive psychology over three years.

In January I gave a lecture at the University of Greenwich on how cognitive science informs my teaching practice, which you can watch here: “Experiments in Learning”.

We organised a series of public lectures on psychology research, Mind Matters. These included Gustav Kuhn and Megan Freeth talking about the science of magic, and Sophie Scott (who gave this year’s Christmas Lectures at the Royal Institution) talking about the science of laughter. You can read about the full programme on the Mind Matters website. Joy Durrant did all the hard work for these talks – thanks Joy!

 

Research: Using big data to test cognitive theories

Our paper “Many analysts, one dataset: Making transparent how variations in analytical choices affect results” is now in press at (the new journal) Advances in Methods and Practices in Psychological Science. See previous coverage in Nature (‘Crowdsourced research: Many hands make tight work’) and 538 (Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for.). This paper is already more cited than many of mine which have been published for years.

On the way to looking at chess players’ learning curves I got distracted by sex differences: surely, I thought, chess would be a good domain to discover the controversial ‘stereotype threat’ effect? It turns out that Female chess players outperform expectations when playing men (in press at Psychological Science).

Wayne Gray edited a special issue of Topics in Cognitive Science: Game XP: Action Games as Experimental Paradigms for Cognitive Science, which features our paper Testing sleep consolidation in skill learning: a field study using an online game.

I presented this work at a related symposium at CogSci17 in London (along with our work on learning in the game Destiny), at a Psychonomics Workshop in Madison, WI – Beyond the Lab: Using Big Data to Discover Principles of Cognition – and at a Pint of Science event in Sheffield (video here).

Our map of implicit racial bias in Europe sparked lots of discussion (and the article was read nearly 2 million times at The Conversation).

 

Trust and reason

I read Hugo Mercier and Dan Sperber’s ‘The Enigma of Reason: A New Theory of Human Understanding’ and it had a huge effect on me, influencing a lot of the new work I’ve been planning this year (my review in the Times Higher is here).

In April I went to a British Academy roundtable meeting on ‘Trust in Experts’. Presumably I was invited because of this research on why we don’t trust the experts. Again, this has influenced lots of future plans, but nothing to show yet.

Related, we have AHRC funding for our project Cyberselves: How Immersive Technologies Will Impact Our Future Selves. Come to the workshop on the effects of teleoperation and telepresence, in Oxford in February.

 

Decision making

Our Leverhulme project on implicit bias and blame wound up, with outputs in press or in preparation.

My former PhD students Maria Panagiotidi, Angelo Pirrone and Cigir Kalfaoglu have also published papers, with me as co-author, making me look more prolific than I am. See the publications page.

 

The highlight of the year has been getting to speak to and work with so many generous, interesting, committed people. Thanks and best wishes to all.

Previous years’ reviews: 2016 review, 2015 review.

Cyberselves: How Immersive Technologies Will Impact Our Future Selves

We’re happy to announce the re-launch of our project ‘Cyberselves: How Immersive Technologies Will Impact Our Future Selves’. Straight out of Sheffield Robotics, the project aims to explore the effects of technology like robot avatars, virtual reality, AI servants and other tech which alters your perception or ability to act. We’re interested in work, play and how our sense of ourselves and our bodies is going to change as this technology becomes more and more widespread.

We’re funded by the AHRC to run workshops and bring our roadshow of hands-on cyber-experiences to places across the UK in the coming year. From the website:

Cyberselves will examine the transforming impact of immersive technologies on our societies and cultures. Our project will bring an immersive, entertaining experience to people in unconventional locations, a Cyberselves Roadshow, that will give participants the chance to transport themselves into the body of a humanoid robot, and to experience the world from that mechanical body. Visitors to the Roadshow will also get a chance to have hands-on experiences with other social robots, coding and virtual/augmented reality demonstrations, while chatting to Sheffield Robotics’ knowledgeable researchers.

The project is a follow-up to our earlier AHRC project, ‘Cyberselves in Immersive Technologies‘, which brought together robotics engineers, philosophers, psychologists, scholars of literature, and neuroscientists.

We’re running a workshop on the effects of teleoperation and telepresence, in Oxford in February (Link).

Call for papers: symposium on AI, robots and public engagement at 2018 AISB Convention (April 2018).

Project updates on twitter, via Dreaming Robots (‘Looking at robots in the news, films, literature and the popular imagination’).

Cross-posted at mindhacks.com


Funded PhD studentship

Funding is available for a PhD studentship in my department, based around a teaching fellowship. This means you’d get four years of funding but would be expected to help teach during your PhD.

Relevant suitability criteria include:

  • Being ready to start on 5th of February
  • Having completed an MSc with a Merit or Distinction
  • Being an EU citizen
  • Having a background in psychology

Projects I’d like to supervise are here, including:

Analysing Big Data to understand learning (like this)

Online discussion: augmenting argumentation with chatbots (with Andreas Vlachos in Computer Science)

Improving skill learning (theory informed experiments!)

A PhD with me will involve using robust and open science methods to address theoretical ideas in cognitive science. Plus extensive mentoring on all aspects of the scholarly life, conducted in Sheffield’s best coffee shops.

Full details of the opportunity here. Deadline: 18th December. Get in touch!

The Open Science Framework

Open science essentials in 2 minutes, part 2

The Open Science Framework (osf.io) is a website designed for the complete life-cycle of your research project – designing projects; collaborating; collecting, storing and sharing data; sharing analysis scripts, stimuli and results; and publishing.

You can read more about the rationale for the site here.

Open Science is fast becoming the new standard for science. As I see it, there are two major drivers of this:

1. Distributing your results via a slim journal article dates from the 17th century. Constraints on the timing, speed and volume of scholarly communication no longer apply. In short, now there is no reason not to share your full materials, data, and analysis scripts.

2. The Replicability crisis means that how people interpret research is changing. Obviously sharing your work doesn’t automatically make it reliable, but since it is a costly signal, it is a good sign that you take the reliability of your work seriously.

You could share aspects of your work in many ways, but the OSF has many benefits:

  • the OSF is backed by serious money & institutional support, so the online side of your project will be live many years after you publish the link
  • It integrates with various other platforms (GitHub, Dropbox, the PsyArXiv preprint server)
  • Totally free, run for scientists by scientists as a non-profit

All this, and the OSF also makes things like version control and pre-registration easy.

Good science is open science. And the fringe benefit is that making materials open forces you to properly document everything, which makes you a better collaborator with your number one research partner – your future self.

Notes to support a lightning talk given as part of the Open Science seminar in the Department of Psychology, University of Sheffield on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Pre-registration

Open Science essentials in 2 minutes, part 1

The Problem

As a scholarly community we allowed ourselves to forget the distinction between exploratory and confirmatory research, presenting exploratory results as confirmatory and presenting post-hoc rationales as predictions. As well as being dishonest, this makes for unreliable science.

Flexibility in how you analyse your data (“researcher degrees of freedom“) can invalidate statistical inferences.

Importantly, you can employ questionable research practices like this (“p-hacking”) without knowing you are doing it. Decide to stop collecting data because the results are significant? Measure 3 dependent variables and use the one that “works”? Exclude participants who don’t respond to your manipulation? All of these are justifiable in exploratory research, but they mean you are exploring a garden of forking paths in the space of possible analyses – when you arrive at a significant result, you won’t be sure whether you got there because of the data or because of your choices.
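
To see how quickly this bites, here is a small simulation sketch (my own illustration, with arbitrary numbers, not taken from any particular paper): there is no real effect, but the analyst measures three dependent variables and reports whichever one “works”. The nominal 5% false positive rate roughly triples.

```python
# Simulation sketch: picking the best of three dependent variables under the null.
# There is no true effect, yet "significant" results appear far more than 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_per_group, n_dvs = 5000, 30, 3

false_positives = 0
for _ in range(n_experiments):
    # Three independent DVs with identical (null) distributions in both groups
    group_a = rng.normal(size=(n_per_group, n_dvs))
    group_b = rng.normal(size=(n_per_group, n_dvs))
    p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
                for i in range(n_dvs)]
    if min(p_values) < 0.05:      # report whichever DV "worked"
        false_positives += 1

print(f"False positive rate: {false_positives / n_experiments:.3f}")  # roughly 0.14, not 0.05
```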

The solution

There is a solution – pre-registration. Declare in advance the details of your method and your analysis: sample size, exclusion conditions, dependent variables, directional predictions.

You can do this

Pre-registration is easy. There is no single, universally accepted way to do it.

  • you could write your data collection and analysis plan down and post it on your blog.
  • you can use the Open Science Framework to timestamp and archive a pre-registration, so you can prove you made a prediction ahead of time.
  • you can visit AsPredicted.org which provides a form to complete, which will help you structure your pre-registration (making sure you include all relevant information).
  • “Registered Reports”: more and more journals are committing to publishing pre-registered studies. They review the method and analysis plan before data collection and agree to publish once the results are in (however they turn out).

You should do this

Why do this?

  • credibility – other researchers (and journals) will know you predicted the results before you got them.
  • you can still do exploratory analysis – pre-registration just makes it clear which is which.
  • forces you to think about the analysis before collecting the data (a great benefit).
  • more confidence in your results.

Further reading

 

Addendum 14/11/17

As luck would have it, I stumbled across a bunch of useful extra resources in the days after publishing this post

Notes to support a lightning talk given as part of the Open Science seminar in the Department of Psychology, University of Sheffield on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Seminar: Framing Effects in the Field: Evidence from Two Million Bets

Seminar announcement

Framing Effects in the Field: Evidence from Two Million Bets

Friday 8th of December, 1pm, The Diamond LT2

Alasdair Brown, School of Economics, UEA

Abstract: Psychologists and economists have often found that risky choices can be affected by the way that the gamble is presented or framed.  We analyse two million tennis bets over a 6 year period to analyse 1) whether frames are important in a real high-stakes environment, and 2) whether individuals pay a premium in order to avoid certain frames.  In this betting market, the same asset can be traded at two different prices at precisely the same time.  The only difference is the way that the two bets are framed.  The fact that these isomorphic bets arise naturally allows us to examine a scale of activity beyond even the most well-funded experiments.  We find that bettors make frequent mistakes, choosing the worse of the two bets in 29% of cases.  Bettors display a (costly) aversion to the framing of bets as high risk, but there is little evidence of loss aversion.  This suggests that individuals are indeed susceptible to framing manipulations in real-world situations, but not in the way predicted by prospect theory.

Part of the Psychology department seminar series. Tom Stafford is the host.

Please contact me if you’d like to meet with Alasdair.


2016 review

Research. Theme #1: Decision making

Most of the work I’ve done this year hasn’t yet seen the light of day. Our Michael J Fox Foundation-funded project using typing as a measure of the strength of habitual behaviour in Parkinson’s Disease continues, and we’ll finish the data analysis next month. Likewise, we should also soon finish the analysis on our project ‘Neuroimaging as a marker of Attention Deficit Hyperactivity Disorder (ADHD)’. Angelo successfully passed his viva (thesis title: “Decision modelling insights in cognition and adaptive decision making”) and takes up a fellowship at Peking University in 2017 (well done Angelo!).

This thread of work, which is concerned with the neural and mechanistic basis of decision making, informs the ‘higher-level’ work I do on decision making, which is preoccupied with bias in decision making and how to address it. This work, done with Jules Holroyd and Robin Scaife, has focussed on the idea of ‘implicit bias‘, and what might be done about it. As well as running experiments and doing conceptual analysis, we’ve been developing an intervention on cognitive and implicit bias, which summarises the current state of research and gives some practical advice on avoiding bias in decision making. I’ve done a number of these sessions with judges, which has been a humbling experience: to merely study decision making and then be confronted with a room of professionals who dedicate their time to actually making fair decisions. As with the other projects, much more on this work will hopefully see the light in 2017.

World events have made studying decision making, in order to understand how to make better decisions, seem more and more relevant. Here’s a re-analysis of some older data which I completed following the UK’s referendum on leaving the EU in June: Why don’t we trust the experts? (and, relatedly, my thoughts on being a European scholar). Also on this topic, a piece for The Conversation: How to check if you’re in a news echo chamber – and what to do about it.

Journal publications on decision making:

Holroyd, J., Scaife, R., & Stafford, T. (in press). Responsibility for Implicit Bias. Philosophy Compass.
Pirrone, A., Azab, H., Hayden, B.Y., Stafford, T., & Marshall, J.A.R. (in press). Evidence for the speed-value trade-off: human and monkey decision making is magnitude sensitive. Decision.
Panagiotidi, M., Overton, P.G., & Stafford, T. (in press). Attention Deficit Hyperactivity Disorder-like traits and distractibility in the visual periphery. Perception.
Pirrone, A., Dickinson, A., Gomez, R., Stafford, T., & Milne, E. (in press). Understanding perceptual judgement in autism spectrum disorder using the drift diffusion model. Neuropsychology.
Bednark, J., Reynolds, J., Stafford, T., Redgrave, P., & Franz, E. (2016). Action experience and action discovery in medicated individuals with Parkinson’s disease. Frontiers in Human Neuroscience, 10, 427. DOI 10.3389/fnhum.2016.00427.
Lu, Y., Stafford, T., & Fox, C. (2016). Maximum saliency bias in binocular fusion. Connection Science, 28(3), 258-269.

(catch up on all publications on my scholarly publications page)

 

Research. Theme #2: Skill and learning

My argument is that games provide a unique data set where participants engage in profound skill acquisition AND the complete history of their skill development is easily recorded. To this end, I have several projects analysing data from games. This new paper: Stafford, T. & Haasnoot, E. (in press). Testing sleep consolidation in skill learning: a field study using an online game. Topics in Cognitive Science. (data + code) is an example of the new kinds of analysis – as well as the new results – which large data from games allow. The paper is an advance on our first work on this data (Stafford & Dewar, 2014), and is a featured project at the Centre for Data on the Mind. I gave a talk about this work at a workshop ‘Innovations in online learning environments: intrapersonal perspectives’, for which there is video (view here: Factors influencing optimal skill learning: data from a simple online game).

I have been analysing a large dataset of chess games (11 million+ games) and presented initial work on this at the Cognitive Science Conference. You can read the paper or see the code, results and commentary in an integrated Jupyter notebook (these are the future). There’s lots more exciting stuff to come out of this data!

Our overview of how the science of skill acquisition can inform development of sensory prostheses came out: Bertram, C., & Stafford, T. (2016). Improving training for sensory augmentation using the science of expertise. Neuroscience & Biobehavioral Reviews, 68, 234-244 (Talk slides, lay summary).

Also: I wrote for The Conversation about an important review of the literature on the benefits of Brain Training, and I had a great summer student looking at the expertise acquired by Candy Crush players.

 

Teaching & thinking about teaching: Not as much to report as last year, since I had teaching leave for the autumn semester, as part of our Leverhulme project on bias and blame. At the beginning of the year I taught a graduate discussion class on dual-process theories in psychology and neuroscience, which was very worthwhile, but didn’t leave much digital trace. Whilst I’ve not been teaching classes, I have been thinking about teaching, publishing this in The Guardian: The way you’re revising may let you down in exams – and here’s why (my third piece in the G on learning), this on the NPJ ‘Science of Learning’ Community: Do students know what’s good for them? (I’m proud of this one, mainly for the quality of the outgoing links it includes), and this, for The Conversation, on an under-noted consequence of testing in education: Good tests make children fail – here’s why.

I also used some informal platforms (i.e. blogging etc.) to produce some guidance for psychology students: this on what I call the Hierarchy of critique, and this on the logic of student experiment reports, and I tried to provoke some discussion around this: I don’t read students’ drafts. Should I?

I did some talks for graduate students (follow the links for slides): Adventures in research blogging and Expanding your writing portfolio.

 

Peer reviewing: I feel this should be recorded somewhere, since peer reviewing is a part of an academic’s job which requires the pinnacle of their expertise and experience, yet is generally unrecognised and unrewarded. This year I helped the scholarly community out by doing grant reviews for the Medical Research Council and the Biotechnology and Biological Sciences Research Council and manuscript reviews for Trends in Cognitive Sciences, Memory and Cognition, Connection Science, Canadian Journal of Philosophy, Journal of European Psychology Students, International Journal of Communication and the Annual Cognitive Science Society conference. From 1st of January I will only be reviewing papers which make their data freely available, as part of the Peer Reviewers’ Openness Initiative.

 

That’s mostly it, bar a few things I couldn’t fit under these four headlines. Thanks to everyone who helped with the work in 2016 – getting to talk, write and pursue ideas with sincere, intelligent, kind and interesting people is the best part of the job.

(Previously: 2015 review)
