
Reproducibility

Open science essentials in 2 minutes, part 3

Let’s define it this way: reproducibility is when your experiment or data analysis can be reliably repeated – the same data and the same procedure give the same results. It isn’t replicability, which we can define as repeating an experiment and its analysis on new data and getting qualitatively similar results. (These aren’t universally accepted definitions, but they are common, and enough to get us started.)

Reproducibility is a bedrock of science – we all know that our methods section should contain enough detail to allow an independent researcher to repeat our experiment. With the increasing use of computational methods in psychology, there’s increasing need – and increasing ability – for us to share more than just a description of our experiment or analysis.

Reproducible methods

Using sites like the Open Science Framework, you can share stimuli and other materials. If you use open-source experiment software like PsychoPy or Tatool, you can easily share the full scripts that run your experiment, and people on different platforms – and without your software licences – can still run it.

Reproducible analysis

Equally important is making your analysis reproducible. You’d think that with the same data, another person – or even you in the future – would get the same results. Not so! Most analyses include thousands of small choices. A mis-step in any of these small choices – lost participants, copy/paste errors, mis-labeled cases, unclear exclusion criteria – can derail an analysis, meaning you get different results each time (and different results from what you’ve published).

Fortunately a solution is at hand! You need to use analysis software that allows you to write a script to convert your raw data into your final output. That means no more Excel sheets (no history of what you’ve done = very bad – don’t be these guys) and no more point-and-click SPSS analysis.

Bottom line: You must script your analysis – trust me on this one
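To make this concrete, here is a minimal sketch of what a scripted analysis can look like, using Python and pandas (the file name, column names and exclusion threshold are hypothetical placeholders, not taken from any real project):

```python
# analysis.py -- a minimal, re-runnable pipeline from raw data to final output.
# File and column names below are hypothetical placeholders.
import pandas as pd

RAW_FILE = "data/raw_trials.csv"      # the raw export, never edited by hand
MIN_ACCURACY = 0.60                   # exclusion criterion, stated once, in code

def load_raw(path=RAW_FILE):
    """Read the untouched raw data exported from the experiment."""
    return pd.read_csv(path)

def exclude_participants(df):
    """Drop participants below the accuracy threshold, and log how many."""
    accuracy = df.groupby("participant")["correct"].mean()
    keep = accuracy[accuracy >= MIN_ACCURACY].index
    print(f"Excluded {df['participant'].nunique() - len(keep)} participants")
    return df[df["participant"].isin(keep)]

def summarise(df):
    """Mean reaction time per condition -- the numbers that go in the paper."""
    return df.groupby("condition")["rt"].agg(["mean", "sem"])

if __name__ == "__main__":
    results = summarise(exclude_participants(load_raw()))
    results.to_csv("output/condition_means.csv")
    print(results)
```

The point is that every decision – which file is the raw data, who gets excluded, what gets reported – lives in the script, so re-running it on the raw data regenerates the published numbers with no undocumented pointing and clicking in between.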

Open data + code

You need to share and document your data and your analysis code. All this is harder work than just writing down the final result of an analysis once you’ve managed to obtain it, but it makes for more robust analysis, and allows someone else to reproduce your analysis easily in the future.

The most likely beneficiary is you – your most likely collaborator in the future is Past You, and Past You doesn’t answer email. Every analysis I’ve ever done I’ve had to repeat, sometimes years later. It saves time in the long run to invest in making a reproducible analysis first time around.

Further Reading

Nick Barnes: Publish your computer code: it is good enough

British Ecological Society: Guide to Reproducible Code

Gael Varoquaux: Computational practices for reproducible science

Advanced

Reproducible Computational Workflows with Continuous Analysis

Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research

Part of a series for graduate students in psychology.
Part 1: pre-registration.
Part 2: the Open-Science Framework.

Cross-posted at mindhacks.com


2017 review

Things that have consumed my attention in 2017…

Teaching & Public Engagement

At the beginning of the year I taught my graduate seminar class on cognitive neuroscience, and we reviewed Cordelia Fine’s “Delusions of Gender” and the literature on sex differences in cognition. I blogged about some of the topics covered (linked here), and gave talks about the topic at Leeds Beckett and the University of Sheffield. It’s a great example of a situation that is common to so much of psychology: strong intuitions guide interpretation as much as reliable evidence.

In the autumn I helped teach undergraduate cognitive psychology, and took part in the review of our entire curriculum as the lead of the “cognition stream”. It’s interesting to ask exactly what a psychology student should be taught about cognitive psychology over three years.

In January I gave a lecture at the University of Greenwich on how cognitive science informs my teaching practice, which you can watch here: “Experiments in Learning”.

We organised a series of public lectures on psychology research, Mind Matters. These included Gustav Kuhn and Megan Freeth talking about the science of magic, and Sophie Scott (who gave this year’s Christmas Lectures at the Royal Institution) talking about the science of laughter. You can read about the full programme on the Mind Matters website. Joy Durrant did all the hard work for these talks – thanks Joy!

 

Research

Using big data to test cognitive theories.

Our paper “Many analysts, one dataset: Making transparent how variations in analytical choices affect results” is now in press at (the new journal) Advances in Methods and Practices in Psychological Science. See previous coverage in Nature (‘Crowdsourced research: Many hands make tight work’) and FiveThirtyEight (‘Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for’). This paper is already more cited than many of mine which have been published for years.

On the way to looking at chess players’ learning curves I got distracted by sex differences: surely, I thought, chess would be a good domain in which to detect the controversial ‘stereotype threat’ effect? It turns out Female chess players outperform expectations when playing men (in press at Psychological Science).

Wayne Gray edited a special issue of Topics in Cognitive Science, Game XP: Action Games as Experimental Paradigms for Cognitive Science, which features our paper Testing sleep consolidation in skill learning: a field study using an online game.

I presented this work at a related symposium at CogSci17 in London (along with our work on learning in the game Destiny), at a Psychonomics workshop in Madison, WI (Beyond the Lab: Using Big Data to Discover Principles of Cognition), and at a Pint of Science event in Sheffield (video here).

Our map of implicit racial bias in Europe sparked lots of discussion (and the article was read nearly 2 million times at The Conversation).

 

Trust and reason

I read Hugo Mercier and Dan Sperber’s ‘The Enigma of Reason: A New Theory of Human Understanding’ and it had a huge effect on me, influencing a lot of the new work I’ve been planning this year (my review in the Times Higher is here).

In April I went to a British Academy roundtable meeting on ‘Trust in Experts’. Presumably I was invited because of this research on why we don’t trust the experts. Again, this has influenced lots of future plans, but nothing to show yet.

Relatedly, we have AHRC funding for our project Cyberselves: How Immersive Technologies Will Impact Our Future Selves. Come to the workshop on the effects of teleoperation and telepresence, in Oxford in February.

 

Decision making

Our Leverhulme project on implicit bias and blame wound up, with outputs in press or in preparation.

My old PhD students Maria Panagiotidi, Angelo Pirrone and Cigar Kalfaoglu have also published papers, with me as co-author, making me look more prolific than I am. See the publications page.

 

The highlight of the year has been getting to speak to and work with so many generous, interesting, committed people. Thanks and best wishes to all.

Previous years’ reviews: 2016 review, 2015 review.

Funded PhD studentship

Funding is available for a PhD studentship in my department, based around a teaching fellowship. This means you’d get four years of funding but would be expected to help teach during your PhD.

Relevant suitability criteria include:

  • Being ready to start on 5th of February
  • Having completed an MSc with a Merit or Distinction
  • Being an EU citizen
  • Having a background in psychology

Projects I’d like to supervise are here, including:

  • Analysing Big Data to understand learning (like this)
  • Online discussion: augmenting argumentation with chatbots (with Andreas Vlachos in Computer Science)
  • Improving skill learning (theory-informed experiments!)

A PhD with me will involve using robust and open science methods to address theoretical ideas in cognitive science. Plus extensive mentoring on all aspects of the scholarly life, conducted in Sheffield’s best coffee shops.

Full details of the opportunity here. Deadline: 18th December. Get in touch!

The Open Science Framework

Open science essentials in 2 minutes, part 2

The Open Science Framework (osf.io) is a website designed for the complete life-cycle of your research project: designing projects; collaborating; collecting, storing and sharing data; sharing analysis scripts, stimuli and results; and publishing.

You can read more about the rationale for the site here.

Open Science is fast becoming the new standard for science. As I see it, there are two major drivers of this:

1. Distributing your results via a slim journal article dates from the 17th century. Constraints on the timing, speed and volume of scholarly communication no longer apply. In short, now there is no reason not to share your full materials, data, and analysis scripts.

2. The replicability crisis means that how people interpret research is changing. Obviously sharing your work doesn’t automatically make it reliable, but since sharing is a costly signal, it is a good sign that you take the reliability of your work seriously.

You could share aspects of your work in many ways, but the OSF has many benefits:

  • The OSF is backed by serious money & institutional support, so the online side of your project will be live many years after you publish the link
  • It integrates with various other platforms (GitHub, Dropbox, the PsyArXiv preprint server)
  • It is totally free, run for scientists by scientists as a non-profit

All this, and the OSF also makes things like version control and pre-registration easy.

Good science is open science. And the fringe benefit is that making materials open forces you to properly document everything, which makes you a better collaborator with your number one research partner – your future self.

Notes to support a lightning talk given as part of an Open Science seminar in the Department of Psychology, University of Sheffield, on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Pre-registration

Open Science essentials in 2 minutes, part 1

The Problem

As a scholarly community we allowed ourselves to forget the distinction between exploratory and confirmatory research, presenting exploratory results as confirmatory and presenting post-hoc rationales as predictions. As well as being dishonest, this makes for unreliable science.

Flexibility in how you analyse your data (“researcher degrees of freedom”) can invalidate statistical inferences.

Importantly, you can employ questionable research practices like this (“p-hacking”) without knowing you are doing it. Decide to stop an analysis because the results are significant? Measure 3 dependent variables and use the one that “works”? Exclude participants who don’t respond to your manipulation? All are justified in exploratory research, but they mean you are exploring a garden of forking paths in the space of possible analyses – when you arrive at a significant result, you won’t be sure whether you got there because of the data or because of your choices.
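To see how much this matters, here is a minimal simulation (my own illustration, not from the original post) of just one fork: measuring three dependent variables and reporting whichever one “works”. Even with no true effect anywhere, the chance of finding at least one significant result is well above the nominal 5%:

```python
# Simulate the "measure 3 DVs, report the one that works" forking path.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_per_group, n_dvs = 10_000, 30, 3
false_positives = 0

for _ in range(n_experiments):
    # No true effect: both groups are drawn from the same distribution on every DV.
    group_a = rng.normal(size=(n_per_group, n_dvs))
    group_b = rng.normal(size=(n_per_group, n_dvs))
    p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
                for i in range(n_dvs)]
    if min(p_values) < 0.05:          # report whichever DV "worked"
        false_positives += 1

print(f"False positive rate: {false_positives / n_experiments:.3f}")
# With 3 independent DVs this comes out near 1 - 0.95**3, i.e. about 0.14, not 0.05.
```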

The solution

There is a solution – pre-registration. Declare in advance the details of your method and your analysis: sample size, exclusion conditions, dependent variables, directional predictions.

You can do this

Pre-registration is easy. There is no single, universally accepted way to do it.

  • You could write your data collection and analysis plan down and post it on your blog.
  • You can use the Open Science Framework to timestamp and archive a pre-registration, so you can prove you made a prediction ahead of time.
  • You can visit AsPredicted.org, which provides a form to complete, helping you structure your pre-registration (and making sure you include all relevant information).
  • “Registered Reports”: more and more journals are committing to publishing pre-registered studies. They review the method and analysis plan before data collection and agree to publish once the results are in (however they turn out).

You should do this

Why do this?

  • Credibility – other researchers (and journals) will know you predicted the results before you got them.
  • You can still do exploratory analysis; pre-registration just makes it clear which is which.
  • It forces you to think about the analysis before collecting the data (a great benefit).
  • It gives you more confidence in your results.

Further reading

 

Addendum 14/11/17

As luck would have it, I stumbled across a bunch of useful extra resources in the days after publishing this post.

Notes to support a lightning talk given as part of an Open Science seminar in the Department of Psychology, University of Sheffield, on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Cognitive Science Conference, Philadelphia

This week, 10-13th August, I am at the Annual Cognitive Science Society Conference in Philadelphia. While there I am presenting work which uses a large data set on chess players and their games.

Previously the phenomenon of ‘stereotype threat’ has been found in many domains: people’s performance suffers when they are made more aware of their identity as a member of a social group which is expected to perform poorly. For example, there is a stereotype that men are better at maths, and stereotype threat has been reported for female students taking maths exams when their identity as a woman is emphasised, even if only subtly (by asking them to declare their gender at the top of the exam paper, for example). This effect has been reported for chess, which is heavily male dominated, especially among top players. However, the reports of stereotype threat in chess, as in many other domains, often rely on laboratory experiments with small numbers of people (around 100 or fewer).

My data are more than 11 million games of chess: every tournament game recorded with FIDE, the international chess authority, between 2008 and 2015. Using these data, I asked if it was possible to observe stereotype threat in this real-world setting. If the phenomenon is real, however small it is, I should be able to observe it playing out in this data – the sheer number of games I can analyse allows me a very powerful statistical lens.

The answer is no: there is no stereotype threat in international chess. To see how I determined this, and what I think it means, you can read the paper here, or see the Jupyter notebook which walks you through the key analysis. And if you’re at the conference, come and visit the poster (as PDF, as PNG). Jeff Sonas, who compiled the data, has been kind enough to allow me to make available a 10% sample of the data (still over 1 million games), and this, along with all the analysis code for the paper, is available via the Open Science Framework.
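For a rough flavour of the approach (not the paper’s actual code – the file and column names below are hypothetical, and the real analysis in the notebook is more careful about ratings, colour and repeated pairings), the core idea is to compare each result against the expectation given by the standard Elo formula and ask whether women facing men fall short of, or exceed, that expectation:

```python
# Rough sketch of the expectation-based analysis. Column names are hypothetical;
# see the paper and the OSF notebook for the real thing.
import pandas as pd

games = pd.read_csv("fide_games_sample.csv")   # e.g. the 10% sample shared via the OSF

# Expected score for the player, from the standard Elo formula.
games["expected"] = 1 / (1 + 10 ** ((games["opponent_elo"] - games["player_elo"]) / 400))

# How far actual results depart from the Elo expectation (positive = outperforming).
games["surplus"] = games["score"] - games["expected"]   # score is 1, 0.5 or 0

women = games[games["player_sex"] == "F"]
print(women.groupby("opponent_sex")["surplus"].agg(["mean", "count"]))
# Under stereotype threat, women facing men should *under*perform expectation;
# the paper reports the opposite pattern in this data.
```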

There’s lots more to come from this data – as well as analysing performance-related effects, the data affords a fantastic opportunity to look at learning curves and try to figure out what affects how players’ performance changes over time.


Why don’t we trust the experts?

During the EU referendum debate a friend of mine, who happens to be a Professor of European Law, asked in exasperation why so much of the country seems unwilling to trust experts on the EU. When we want a haircut, we trust the hairdresser. If we want a car fixed, we trust the mechanic. Now, when we need informed comment on EU, why don’t we trust people who have spent a lifetime studying the topic?

The question rattled around in my mind, until I realised I had actually done some research which provides part of the answer. During my post-doc with Dick Eiser we did a survey of people who lived on land which may have been contaminated with industrial pollutants. We asked people with houses on two such ‘brownfield’ sites, one in the urban north, and one in the urban south, who they trusted to tell them about the possible risks.

One group we asked about was scientists. The householders answered on a scale from 1 to 5 (5 being the most trust). Here’s the distribution of answers:

[Figure: distribution of trust ratings for scientists]

As you can see, scientists are highly trusted. Compare with the ratings of property developers:

[Figure: distribution of trust ratings for property developers]

We also asked our respondents how they rated different communicators on various dimensions. One dimension was expertise about this topic. As you’d expect, scientists were rated as highly expert in the risks of possible brownfield pollution. We also asked people whether they believed the different potential communicators of risks had their interests at heart, and whether they would be open with their beliefs about risks. With this information, it is possible, statistically, to analyse not just who is trusted, but why they are trusted.

The results, published in Eiser et al. (2009), show that expertise is not the strongest determinant of who is trusted. Instead, people trust those who they believe have their best interests at heart. This is three to four times more important than perception of expertise (fig. 3 on p294 for those reading along with the paper in hand).
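In spirit, the analysis amounts to regressing trust ratings on the perception measures and comparing standardised coefficients. Here is a minimal sketch of that idea in Python (the file and column names are hypothetical placeholders for the shared survey data, and the published analysis is more involved than a single regression):

```python
# Minimal sketch: which perception best predicts trust in scientists?
# Column names are hypothetical placeholders for the shared survey data.
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("brownfield_survey.csv")

predictors = survey[["perceived_expertise", "shared_interests", "openness"]]
outcome = survey["trust_scientists"]            # the 1-5 trust rating

# Standardise so the coefficients are directly comparable in size.
X = (predictors - predictors.mean()) / predictors.std()
y = (outcome - outcome.mean()) / outcome.std()

model = sm.OLS(y, sm.add_constant(X), missing="drop").fit()
print(model.summary())
# The claim in the post corresponds to the shared_interests coefficient being
# several times larger than the perceived_expertise coefficient.
```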

One way of making this clear is to pick out the people who have high trust in scientists (a rating of 4 or 5) and compare them to people who have low trust (rating scientists 1 or 2 for trust). The perceptions of their expertise differ, but not by much:

[Figure: perceived expertise of scientists, by trust group]

Even those who don’t trust scientists recognise that they know about pollution risks. In other words, their actual expertise isn’t in question.

Where the groups differ is in whether scientists are seen to have the householders’ interests at heart:

[Figure: perceived shared interests of scientists, by trust group]

So those who didn’t trust the scientists tended to believe that the scientists don’t care about them.

The difference is made clear by one group that was highly trusted to communicate risks of brownfield land – friends and family:

[Figure: distribution of trust ratings for friends and family]

Again, the same relationship between variables held. Trust in friends and family was driven more by a perception of shared interests than by perceptions of expertise. Remember, this isn’t a measure of generalised trust, but specifically of trust in their communications about pollution risks. Maybe your friends and family aren’t experts in pollution risks, but they surely have your best interests at heart, and that is why they are nearly as trusted on this topic as scientists, despite their lack of expertise.

So here we have a partial answer to why experts aren’t trusted: they aren’t trusted by people who feel alienated from them. My reading of this study is that it isn’t that we live in a ‘post-fact’ political climate. Rather, it is that attempts to take facts out of their social context won’t work. For me and my friends it seems incomprehensible to ignore the facts, whether about the science of vaccination or the law and economics of leaving the EU. But my friends and I do very well from the status quo – the Treasury, the Bar, the University work well for us. We know who these people are, we know how they work, and we trust them because we feel they are working for us, in some wider sense. People who voted Leave do suffer from a lack of trust, and my best guess is that this reflects a belief that most authorities aren’t on their side, not that they necessarily reject their status as experts.

The paper is written up as: Eiser, J. R., Stafford, T., Henneberry, J., & Catney, P. (2009). “Trust me, I’m a Scientist (Not a Developer)”: Perceived Expertise and Motives as Predictors of Trust in Assessment of Risk from Contaminated Land. Risk Analysis, 29(2), 288-297. I’ve just made the data and analysis for this post available here.


New paper: Improving training for sensory augmentation using the science of expertise.

A few years ago, we started work on a device we called “the tactile helmet” (Bertram et al., 2013). This would, the plan was, help you navigate without sight, using ultrasound sensors to give humans an extended sense of touch. Virtual rat-whiskers!

As well as doing some basic testing with the device (Kerdegari et al., 2014), Craig and I also reviewed the existing literature on similar sensory augmentation devices.

What we found was that there are many such devices, with little consistency in how their effectiveness is assessed. Critically, for us, research reports neglected to consider the ease and extent of training with a device. So some devices have users who have practised with them for thousands of hours (even decades!), while the results from others are reported for users who have had little more than a few minutes’ familiarisation.

In our new paper, Improving training for sensory augmentation using the science of expertise, we review existing sensory augmentation devices with an eye on how users can be trained to use them effectively. We make recommendations for which features of training should be reported, so fair comparisons can be made across devices. These aspects of training also provide a natural focus for how training can be optimised (because for each of them, as cognitive scientists, we know how they can be adjusted so as to enhance learning). Our features of training are:

  • The total training duration
  • Session duration and interval
  • Feedback
  • The similarity of training to end use

We discuss each of these in turn, with reference to the psychology literature on skill acquisition, as well as discussing non-training factors which affect device usability.

A post-print of the paper is available here:

References:

Bertram, C., & Stafford, T. (2016). Improving training for sensory augmentation using the science of expertise. Neuroscience & Biobehavioral Reviews, 68, 234-244.

Bertram, C., Evans, M. H., Javaid, M., Stafford, T., & Prescott, T. (2013). Sensory augmentation with distal touch: the tactile helmet project. In Biomimetic and Biohybrid Systems (pp. 24-35). Springer Berlin Heidelberg.

Kerdegari, H., Kim, Y., Stafford, T., & Prescott, T. J. (2014). Centralizing bias and the vibrotactile funneling illusion on the forehead. In Haptics: Neuroscience, Devices, Modeling, and Applications (pp. 55-62). Springer Berlin Heidelberg.


Crowdsourcing analysis, an alternative approach to scientific research

Crowdsourcing analysis, an alternative approach to scientific research: Many Hands make tight work

Guest Lecture by Raphael Silberzahn, IESE Business School, University of Navarra

11:00 – 12:00, 9th of December, 2015

Lecture Theatre 6, The Diamond (32 Leavygreave Rd, Sheffield S3 7RD)

Is soccer players’ skin colour associated with how often they are shown a red card? The answer depends on how the data is analysed. With access to a dataset capturing the player-referee interactions of premiership players from the 2012-13 season in the English, German, French and Spanish leagues, we organised a crowdsourced research project involving 29 different research teams and 61 individual researchers. Teams initially exchanged analytical approaches – but not results – and incorporated feedback from other teams into their analyses. Despite this, the teams came to a broad range of conclusions. The overall group consensus (that a correlation exists) was much more tentative than would be expected from a single-team analysis. Raphael Silberzahn will provide insights from his perspective as one of the project coordinators and Tom Stafford will speak about his experience as a participant in this project. We will discuss how smaller research projects can also benefit from bringing together teams of skilled researchers to work simultaneously on the same data, thereby balancing discussions and providing scientific findings with greater validity.

Links to coverage of this research in Nature (‘Crowdsourced research: Many hands make tight work’) and on FiveThirtyEight (‘Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for’). Our group’s analysis was supported by some great data exploration and visualisation work led by Mat Evans. You can see an interactive notebook of this work here.

 


Bias mitigation

On Friday I gave a talk on cognitive and implicit biases to a group of employment tribunal judges. The judges were a great audience, far younger, more receptive and more diverse than my own prejudices had led me to expect, and I enjoyed the opportunity to think about the area of cognitive bias, and how some conclusions from that literature might be usefully carried over to the related area of implicit bias.

First off, let’s define cognitive bias versus implicit bias. Cognitive bias is a catch-all term for systematic flaws in thinking. The phrase is associated with the ‘judgement and decision making’ literature which was spearheaded by Daniel Kahneman and colleagues (and for which he received the Nobel Prize in 2002). Implicit bias, for our purposes, refers to a bias in judgements of other people which is both unduly influenced by social categories such as sex or ethnicity and in which the person making this biased judgement is either unaware of or unable to control the undue influence.

So from the cognitive bias literature we get a menagerie of biases such as ‘the overconfidence effect’, ‘confirmation bias’, ‘anchoring’, ‘base rate neglect’, and on and on. From implicit bias we get findings such as that maths exam papers are marked higher when they carry a male name at the top, that job applicants with stereotypically black American names have to send out twice as many CVs, on average, to get an interview, or that people sit further away from someone they believe has a mental health condition such as schizophrenia. Importantly, all these behaviours are observed in individuals who insist that they are not only not sexist/racist/prejudiced but are actively anti-sexism/racism/prejudice.

My argument to the judges boiled down to four key points, which I think build on one another:

1. Implicit biases are cognitive biases

There is slippage in how we identify cognitive biases compared to how we identify implicit biases. Cognitive biases are defined against a standard of rationality – either we know the correct answer (as in the Wason selection task, for example), or we feel able to define irrelevant factors which shouldn’t affect a decision (as in the framing effect found with the ‘Asian Disease problem’). Implicit biases use the second, contrastive, standard. Additionally, it is unclear whether the thing being violated is a standard of rationality or a standard of equity. So, for example, it is unjust to allow the sex of a student to influence their exam score, but is it irrational? (If you think there is a clear answer to this, either way, then you are more confident of the ultimate definition of rationality than a full century of scholars.)

Despite these differences, implicit biases can usefully be thought of as a kind of cognitive bias. They are a habit of thought, which produces systematic errors, and which we may be unaware we are deploying (although elsewhere I have argued that the evidence for the unconscious nature of these processes is over-egged). Once you start to think of implicit biases and cognitive biases as very similar, it buys some important insights.

Specifically:

2. Biases are integral to thinking

Cognitive biases exist for a reason. They are not rogue processes which contaminate what would otherwise be intelligent thought. They are the foundation of intelligent thought. To grasp this, you need to appreciate just how hard principled, consistent thought is. In a world of limited time, information, certainty and intellectual energy, cognitive biases arise from necessary short-cuts and assumptions which keep our intellectual show on the road. Time and time again psychologists have looked at specific cognitive biases and found that there is a good reason for people to make that mistake. Sometimes they even find that animals make the same mistake, demonstrating that the error persists even without the human traits of pride, ideological confusion and general self-consciousness – suggesting that there are good evolutionary reasons for it to exist.

For an example, take confirmation bias. Although there are risks to preferring to seek information that confirms whatever you already believe, the strategy does provide a way of dealing with complex information, and a starting point (i.e. what you already suspect) which is as good as any other. It doesn’t require that you speculate endlessly about what might be true, and in many situations the world (or other people) is more than likely to put contradictory evidence in front of you without you having to expend effort in seeking it out. Confirmation bias exists because it is an efficient information-seeking strategy – certainly more efficient than constantly trying to disprove every aspect of what you believe.

Implicit biases concern social judgement and socially significant behaviours, but they also seem to share a common mechanism. In cognitive terms, implicit biases arise from our tendency towards associative thought – we pick up on things which co-occur, and have a tendency to make judgements relying on these associations, even if strict logic does not justify it. How associations are created and strengthened in our minds is beyond the scope of this post.

For now, it is clear that making judgements based on circumstantial evidence is unjustified but practical. An uncontentious example: you get sick after eating at a particular noodle bar. Maybe it was bad luck, maybe you were going to get sick anyway, or maybe it was the sandwich you ate at lunch, but the odds are good you’ll avoid the noodle bar in the future. Why chance it, when there are plenty of other restaurants? It would be impractical never to make such assumptions, and the assumption-laden (biased!) route offers a practical solution to the riddle of what you should conclude from your food poisoning.

3. There is no bias-free individual

Once you realise that our thinking is built on many fast, assumption-making processes which may not be perfect – indeed which have systematic tendencies that produce the errors we identify as cognitive bias – you then realise that it would be impossible to have bias-free decision processes. If you want to make good choices today rather than perfect choices in the distant future, you have to compromise and accept decisions which will have some biases in them. You cannot free yourself of bias, in this sense, and you shouldn’t expect to.

This realisation encourages some humility in the face of cognitive bias. We all have biases, and we shouldn’t pretend that we don’t or hope that we can free ourselves of them.

We can be aware of the biases we are exposed to and likely to harbour within ourselves. We can, with a collective effort, change the content of the biases we foster as a culture. We can try hard to identify situations where bias may play a larger role, or identify particular biases which are latent in our culture or thinking. We can direct our bias mitigation efforts at particularly important decisions, or decisions we think are particularly likely to be prone to bias. But bias-free thinking isn’t an option, it is part of who we are.

4. Many effective mitigation strategies will be supra-personal:

If humility in the face of bias is the first practical reaction to the science of cognitive bias, I’d argue that the second is to recognise that bias isn’t something you can solve on your own at a personal psychological level. Obviously you have to start by trying your honest best to be clear-headed and reasonable, but all the evidence suggests that biases will persist, that they cannot be cut out of thinking, and that they may even thrive when we think ourselves most objective.

The solution is to embed yourself in groups, procedures and institutions which help counteract bias. Obviously, to a large extent, the institutions of law have evolved to counter personal biases. It would be an interesting exercise to review how legal cases are conducted from a psychological perspective, interpreting different features in terms of how they work with or against our cognitive tendencies (so, for example, the adversarial system doesn’t get rid of confirmation bias, but it does mean that confirmation bias is given equal and opposite opportunity to work in the minds of the two advocates).

Amongst other kinds of ‘ecological control’ we might count proper procedure (following the letter of the law, checklists, etc.), control of (admissible) information, and the systematic collection of feedback (without which you may never come to realise that you are making systematically biased decisions).

Slides from my talk here as Google docs slides and as PDF. Thanks to Robin Scaife for comments on a draft of this post. Cross-posted to the blog of our Leverhulme trust funded project on “Bias and Blame“.
