New PhD Student: Angelo Pirrone

Angelo joins us in the Department to run experimental studies of decision making. His second supervisor is James Marshall, a Reader in Computer Science and head of the Behavioural and Evolutionary Theory Lab. Angelo’s funding comes from the cross-disciplinary Neuroeconomics network I lead: “Decision making under uncertainty: brains, swarms and markets”. We’re hoping to use computational, neuroscientific and evolutionary perspectives to guide the development of behavioural studies of perceptual decision making. More about this, and the neuroeconomics network, soon. In the meantime – welcome to Sheffield, Angelo!

Update August 2016: Well, that went quickly! Angelo is writing up and looking for post-doctoral positions. His CV is here.

New paper: No learning where to go without first knowing where you’re coming from: action discovery is trajectory, not endpoint based.

We’ve a new paper out in Frontiers in Cognitive Science: No learning where to go without first knowing where you’re coming from: action discovery is trajectory, not endpoint based. This was work done by Martin and Tom Walton as part of the IM-CLeVeR project.

The research uses our joystick task (Stafford et al, 2013) to look at how people learn a novel arbitrary action (in this case moving the joystick to a particular position). By comparing a condition (A) where the start point of the movement is always the same with a condition (B) where the start point moves around, we are able to look at the way people find it easiest to learn novel actions. In condition (A) you could learn the correct action by identifying the target location OR you could learn the correct action by identifying a target trajectory to make (which, since you always start from the same place, would work just as well to get you to the target location). In condition (B) you can’t rely on this second strategy; you have to identify the target location and head towards it from wherever you start. Surprisingly, participants in our experiment were very bad at this second condition – so much so that over the number of trials we gave them, they didn’t appear to learn anything about the target location and so acquired no novel action. This suggests that we have a strong bias to rely on trajectories of movement when acquiring novel actions, rather than coding them by arbitrary spatial endpoints.

The paper: Thirkettle, M., Walton, T., Redgrave, P., Gurney, K., & Stafford, T. (2013). No learning where to go without first knowing where you’re coming from: action discovery is trajectory, not endpoint based. Frontiers in Cognitive Science, 4, 638. doi:10.3389/fpsyg.2013.00638

The paper is published as part of our Special Topic in Frontiers on Intrinsic motivations and open-ended development in animals, humans, and robots

Stafford, T., Thirkettle, M., Walton, T., Vautrelle, N., Hetherington, L., Port, M., Gurney, K.N., Redgrave, P. (2012), A Novel Task for the Investigation of Action Acquisition, PLoS One, 7(6), e37749.

New paper: The Discovery of Novel Actions Is Affected by Very Brief Reinforcement Delays and Reinforcement Modality

In this paper, in press at the Journal of Motor Behavior, we build on our previous work which developed a novel task for investigating how we learn actions. Our interest is in how the motor system connects what we’ve been doing with what happens. When something you do causes a change in the world you want to identify what exactly it was that you did that had the effect. Our hypothesis is that the machinery of the subcortical basal ganglia does this job for us – in the domain of motor learning. One key feature of the basal ganglia architecture is the speed with which dopamine signalling responds to external events. Profs Redgrave and Gurney have argued that this rapidity is necessary because even millisecond delays in event signalling lead to a disproportionate increase in the difficulty of connecting the correct part of what you’ve done with the event. In other words, with delay you easily lose track of what it was that you did that caused a surprising outcome.

This is the context for the experiments reported in the new paper. These experiments show that our task has a very high sensitivity to delay – of the order of 100 ms. This fits with the Redgrave-Gurney theory of dopamine signalling, and is considerably briefer than previous work looking at the effects of delay on motor learning. This is because, we argue, previous work uses response frequency (of an already learnt action) as the dependent variable, whereas our task is better designed to look at the emergence of new actions as they are in the process of being learnt.

Here’s the abstract:

The authors investigated the ability of human participants to discover novel actions under conditions of delayed reinforcement. Participants used a joystick to search for a target indicated by visual or auditory reinforcement. Reinforcement delays of 75–150 ms were found to significantly impair action acquisition. They also found an effect of modality, with acquisition superior with auditory feedback. The duration at which delay was found to impede action discovery is, to the authors’ knowledge, shorter than that previously reported from work with operant and causal learning paradigms. The sensitivity to delay reported, and the difference between modalities, is consistent with accounts of action discovery that emphasize the importance of a time stamp in the motor record for solving the credit assignment problem.

And the citation:

Walton, T., Thirkettle, M., Redgrave, P., Gurney, K. N., & Stafford, T. (2013). The Discovery of Novel Actions Is Affected by Very Brief Reinforcement Delays and Reinforcement Modality. Journal of Motor Behavior, 45(4), 351-360.

New paper: “The path to learning: Action acquisition is impaired when visual reinforcement signals must first access cortex”

Using cunning experimental design we provide evidence which supports a new theory of how the brain learns new actions. Back in 2006, our professors Redgrave and Gurney proposed a new theory of how the brain learns new actions, centered around the subcortical brain area the basal ganglia and the function of the neurotransmitter dopamine. This was exciting for two reasons: first, it proposed a theory of what these parts of the brain might do, based on our understanding of the pathways involved and the computations they might support; and second, it was a theory in flat contradiction to the most popular theory of dopamine function, the reward prediction error hypothesis.

We set out to test this theory. We used a novel task to assess action-outcome learning, in which human subjects moved a joystick around until they could identify a target movement. We didn’t record the dopamine directly – a tall order for human subjects – but instead used our knowledge of what triggers dopamine to compare two learning conditions: one where dopamine would be triggered as normal, and one where we reasoned the dopamine signal would be weakened.

We did this by using two different kinds of reinforcement signals, either a simple luminance change (i.e. a white flash), or a specifically calibrated change in colour properties (visual psychophysics fans: a shift along the tritan line). The colour change signal is only visible to some of the cells in the eye, the s-cone photoreceptors. Importantly, for our purposes, this means that although the signal travels the cortical visual pathways it does not enter the subcortical visual pathway to the superior colliculus. And the colliculus is the main, if not only, route to trigger dopamine release in the basal ganglia.

So by manipulating the stimulus properties we can control the pathways the stimulus information travels. Either the reinforcement signal goes directly to the colliculus and so to the dopamine (luminance change condition), or the signal must travel through visual cortex first and then to the colliculus, ‘the long way round’, to get to the dopamine (s-cone condition).

The result is a validation for the action-learning hypothesis: when reinforcement signals are invisible to the colliculus learning new action-outcome associations is harder. We also did an important control experiment which showed that the impairment due to the s-cone signals couldn’t be matched by simple transport delay of the stimulus information; this suggests the s-cone signal is weaker, not just slower in terms of dopaminergic action. You can read the full thing here.

The results aren’t conclusive – no behavioural experiment which didn’t record dopamine directly could be – but we think it is a strong result. Popper said there are two kinds of results to be most interested in. One was the experiment which proved a theory wrong. The other – which we believe this is – is an experiment which confirms a bold hypothesis. There are no other theories which would suggest this experiment, and only the Redgrave and Gurney theory predicted the result we got before we got it. This makes it a startling validation for the theory and that is why we’re really proud of the paper.

This work was funded by our European project, IM-CLeVeR, and all the difficult work was done by Martin Thirkettle, building on Tom Walton’s foundation.

Thirkettle, M., Walton, T., Shah, A., Gurney, K., Redgrave, P., & Stafford, T. (2013). The path to learning: Action acquisition is impaired when visual reinforcement signals must first access cortex. Behavioural Brain Research, 243, 267–272. doi:10.1016/j.bbr.2013.01.023

New paper: Memory Enhances the Mere Exposure Effect

This research used a novel testing strategy to overturn a long-standing claim in the literature. The mere exposure effect is the finding that simply experiencing something inclines you to like it. Obviously, back in the days of behaviourism this provided a marked contrast to reward-induced preferences. A landmark paper by Bob Zajonc showed that this effect could hold even if you weren’t aware of the original exposure. (Incidentally it was this paper, as far as I can tell, which reignited interest in subliminal perception after the topic had fallen into ‘hidden persuader’ ignominy).

For a long time, based partly on the influence of this seminal paper, it has been reported that explicit memory for stimuli will reduce the mere exposure effect. The logic is that explicit memory will allow people to use a deliberate discounting strategy (something along the lines of “I know I’ve seen that before, so maybe I just feel positive about it because I’ve seen it before”). This isn’t implausible, but does conflict with a large marketing literature which suggests that sustained engagement with marketing materials is more likely to lead to preference (and it is just such engagement with adverts which you would expect to be accompanied by explicit memory).

I put test stimuli in my PSY101 lectures, and then weeks later tested the students on their preferences for these stimuli and a matched group which they hadn’t seen. This allowed me to collect a high number of participants for an experiment which had high ecological validity (and still many elements of experimental control).

Frontiers special issue on intrinsic motivation and open-ended development

Our special issue in Frontiers in Cognitive Science is now accepting submissions: Intrinsic motivations and open-ended development in animals, humans, and robots

This call stems from the EU FP7 project “IM-CLEVER”, a programme of work that involved computer scientists, neuroscientists, psychologists and roboticists in developing controllers that can guide a robot to learn by exploring the world.

The special issue will gather together work related to this task. ‘Intrinsic motivations’ are those that guide exploration – things like curiosity, play or desire for mastery. The emphasis is on learning systems which are more than the simple stimulus-response or response-reward learning which has dominated learning theory for so long. ‘Open-ended development’ means learning that doesn’t have a goal or limit, but is instead designed to produce skills and abilities which can be built on to produce ever more complex skills and abilities. The call welcomes papers from experimental, theoretical and engineering perspectives. The full text of the call is here.

Brain network: social media and the cognitive scientist

This just published in Trends in Cognitive Sciences. Abstract:

Cognitive scientists are increasingly using online social media, such as blogging and Twitter, to gather information and disseminate opinion, while linking to primary articles and data. Because of this, internet tools are driving a change in the scientific process, where communication is characterised by rapid scientific discussion, wider access to specialist debates, and increased cross-disciplinary interaction. This article serves as an introduction to and overview of this transformation.

Reference: Stafford, T., & Bell, V. (2012). Brain network: social media and the cognitive scientist. Trends in Cognitive Sciences, 16(10), 489–490. doi:10.1016/j.tics.2012.08.001

I’m on Twitter as @tomstafford, btw

Fundamentals of learning: the exploration-exploitation trade-off

The exploration-exploitation trade-off is a fundamental dilemma whenever you learn about the world by trying things out. The dilemma is between choosing what you know and getting something close to what you expect (‘exploitation’) and choosing something you aren’t sure about and possibly learning more (‘exploration’). For example, suppose you are in a restaurant and you look at the menu:

  • Fish and Chips
  • Chole Poori
  • Paneer Uttappam
  • Khara Dosa

Assuming for the sake of example that you’re not very good with Sri Lankan food, you’ve now got a choice. You can ‘exploit’ – go with the fish and chips, which will probably be alright – or you can ‘explore’ – try something you haven’t had before and see what you get. Obviously which you decide to do will depend on many things: how hungry you are, how good the restaurant reviews are, how adventurous you are, how often you reckon you’ll be coming back, etc. What’s important is that the study of the best way to make these kinds of choices – called reinforcement learning – has shown that optimal learning requires that you sometimes make some bad choices. This means that sometimes you have to choose to avoid the action you think will be most rewarding, and take an action which you think will be less rewarding. The rationale is that these ‘sub-optimal’ actions are necessary for your long term benefit – you need to go off track sometimes to learn more about the environment. The exploration-exploitation dilemma is really a trade-off: enjoy more now vs learn more now and enjoy later. You can’t avoid it, all you can do is position yourself somewhere along the spectrum.
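For readers who like to see the idea in code: the simplest formalisation of this dilemma in reinforcement learning is the multi-armed bandit, and the simplest strategy for it is ‘epsilon-greedy’ – mostly exploit your current best guess, but explore at random some fraction of the time. The sketch below is purely illustrative (the payoff values and the four ‘menu items’ are invented, not from our experiments):

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, n_trials=1000, seed=0):
    """Illustrative epsilon-greedy choice among options with unknown payoffs.

    true_means: hypothetical average payoff of each option ('menu item').
    With probability epsilon we explore (pick at random); otherwise we
    exploit the option with the highest estimated payoff so far.
    """
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n   # running estimate of each option's payoff
    counts = [0] * n        # how often each option has been chosen
    for _ in range(n_trials):
        if rng.random() < epsilon:
            choice = rng.randrange(n)                            # explore
        else:
            choice = max(range(n), key=lambda i: estimates[i])   # exploit
        reward = rng.gauss(true_means[choice], 1.0)              # noisy payoff
        counts[choice] += 1
        # incremental update of the running mean for the chosen option
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return estimates, counts

# Four dishes with different (unknown to the learner) average payoffs
estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.9, 0.4])
```

Even this toy version shows the trade-off: with epsilon at zero you can get stuck on an early lucky choice, while with epsilon too high you waste trials on dishes you already know are mediocre.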

Because the trade-off is fundamental we would expect to be able to see it in all learning domains, not just restaurant food choices. In work just published, we’ve been using a new task to look at how actions are learnt. Using a joystick we asked people to explore the space of all possible movements, giving them a signal when they made a particular target movement. This task – which we’re pretty keen on – gives us a lens to look at the relation between how people explore the possible movements they can make and which particular movements they learn to rely on to generate predictable outcomes (which we call ‘actions’).

Using data gathered from this task, it is possible to see the exploitation-exploration trade-off in action. With each target people get 10 attempts to try to identify the right movement to make. Obviously some successful movements will be more efficient than others, because it is possible to hit the target after going all “round the houses” first, adding lots of extraneous movements and taking longer than needed. If you had a success like this you could repeat it exactly (‘exploit’), or try and cut out some of the extraneous movement and risk missing the target (‘explore’). Obviously this refinement of action through trial and error is of critical interest to anyone who cares about how we learn skilled movements.

I calculated an average performance score for the first 50% and second 50% of attempts (basically a measure of distance travelled before hitting the target – so lower scores mean better performance). I also calculated how variable these performance scores were in the first 50% and second 50%. Normally we would expect people who perform best in the first half of a test to perform best in the second half (depressingly, people who start out ahead usually stay there!). But this analysis showed up something interesting: a strong correlation between variability in the first half and performance in the second half. You can see this in the graph.

This shows that people who are most inconsistent when they start to learn perform best towards the end of learning. Usually inconsistency is a bad sign, so it is somewhat surprising that it predicts better performance later on. The obvious interpretation is in terms of the exploration-exploitation trade-off. The inconsistent people are trying out more things at the beginning, learning more about what works and what doesn’t. This provides them with the foundation to perform well later on. This pattern holds when comparing across individuals, but it also holds for comparing across trials (so for the same individual, their later performance is better for targets on which they are most inconsistent early in learning).
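The analysis itself is straightforward to sketch. The snippet below shows the shape of the calculation – first-half variability correlated against second-half mean score – on simulated data; the numbers, participant count and generative assumptions are invented for illustration and are not the paper’s actual dataset:

```python
import random
import statistics

def split_half_analysis(scores_per_person):
    """For each person: sd of first-half scores (variability) and
    mean of second-half scores (lower = better performance)."""
    first_sd, second_mean = [], []
    for scores in scores_per_person:
        half = len(scores) // 2
        first_sd.append(statistics.stdev(scores[:half]))
        second_mean.append(statistics.mean(scores[half:]))
    return first_sd, second_mean

def pearson(x, y):
    """Pearson correlation coefficient, computed directly."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Simulated participants: early explorers (high first-half variability)
# end up with lower (better) second-half scores, as in the post.
rng = random.Random(1)
people = []
for _ in range(40):
    explore = rng.uniform(0.1, 2.0)   # how much this person explores early on
    first = [10 + rng.gauss(0, explore) for _ in range(10)]
    second = [10 - 2 * explore + rng.gauss(0, 0.5) for _ in range(10)]
    people.append(first + second)

sd1, m2 = split_half_analysis(people)
r = pearson(sd1, m2)   # negative: more early variability, lower (better) later scores
```

Because lower scores mean better performance here, the exploration-helps-later pattern shows up as a negative correlation between first-half variability and second-half score.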

You can read about this, and more, in our new paper, which is open-access over at PLoS One: A novel task for the investigation of action acquisition.

New paper: A novel task for the investigation of action acquisition

Our new paper, A novel task for the investigation of action acquisition, has been published in PLoS One today. The paper describes a new paradigm we’ve been using to investigate how actions are learnt.

It’s a curious fact that although psychologists have thoroughly investigated how actions are valued (i.e. how you figure out how good or bad a thing is to do), and how actions are trained (i.e. shaped and refined over time), the same effort has not gone into investigating how a behaviour is first identified and stored as a part of our repertoire. We hope this task provides a useful tool for opening up this area for investigation.

As well as the basic description of the task, the paper also contains a section outlining how the form of learning that the task makes available for inspection differs from the forms of learning made available by other ‘action learning’ tasks (such as, for example, operant conditioning tasks). In addition to serving an under-investigated area of learning research, the task also has a number of practical benefits. It is scalable in difficulty, suitable for repeated measures designs (meaning you can do it again and again – it isn’t something you learn once and then can’t be tested on any more), as well as being adaptable for different species (meaning you can test humans and non-human animals on the task).

The paper is based on work done as part of the EU robotics project I’m on (‘IM-CLeVeR’) and on Tom Walton’s PhD thesis, The Discovery of Novel Actions.