Attentional coordination in demonstrator-observer dyads facilitates learning and predicts performance in a novel manual task

Article


Pagnotta, M., Laland, K. N. and Coco, M. 2020. Attentional coordination in demonstrator-observer dyads facilitates learning and predicts performance in a novel manual task. Cognition. 201 (Art. 104314).
Authors: Pagnotta, M., Laland, K. N. and Coco, M.
Abstract

Observational learning is a form of social learning in which a demonstrator performs a target task in the company of an observer, who may as a consequence learn something about it. In this study, we approach social learning in terms of the dynamics of coordination rather than the more common perspective of transmission of information. We hypothesised that observers must continuously adjust their visual attention relative to the demonstrator's time-evolving behaviour in order to benefit from it. We eye-tracked observers repeatedly watching videos showing a demonstrator solving one of three manipulative puzzles before they attempted the task themselves. The presence of the demonstrator's face and the availability of his verbal instructions in the videos were manipulated. We then used recurrence quantification analysis to measure the dynamics of coordination between the overt attention of the observers and the demonstrator's manipulative actions. Bayesian hierarchical logistic regression was applied to examine (1) whether the observers' performance was predicted by these indices of coordination, (2) how performance changed as they accumulated experience, and (3) whether the availability of the demonstrator's speech and intentional gaze mediated this change. Results showed that learners better able to coordinate their eye movements with the manipulative actions of the demonstrator had an increasingly higher probability of success in solving the task. The availability of speech was beneficial to learning, whereas the presence of the demonstrator's face was not. We argue that focusing on the dynamics of coordination between individuals may greatly improve understanding of the cognitive processes underlying social learning.

Journal: Cognition
Journal citation: 201 (Art. 104314)
ISSN: 0010-0277
Year: 2020
Publisher: Elsevier
Accepted author manuscript
File access level: Anyone
Supplemental file
File access level: Anyone
Digital Object Identifier (DOI): 10.1016/j.cognition.2020.104314
Web address (URL): https://doi.org/10.1016/j.cognition.2020.104314
Publication dates
Online: 23 May 2020
Publication process dates
Accepted: 23 Apr 2020
Deposited: 16 Jul 2020
Funder: John Templeton Foundation
Leverhulme Trust
Fundação para a Ciência e Tecnologia
Underpinning data: Supplementary material for 'Attentional coordination in demonstrator-observer dyads facilitates learning and predicts performance in a novel manual task'
Copyright holder: © 2020 Elsevier
Permalink: https://repository.uel.ac.uk/item/88391

Download files

Supplemental file
1-s2.0-S0010027720301335-mmc1.docx
File access level: Anyone

Restricted files

Accepted author manuscript



Related outputs

Supplementary material for 'Attentional coordination in demonstrator-observer dyads facilitates learning and predicts performance in a novel manual task'
Pagnotta, M., Laland, K. N. and Coco, M. 2020. Supplementary material for 'Attentional coordination in demonstrator-observer dyads facilitates learning and predicts performance in a novel manual task'.
Age-related differences during visual search: the role of contextual expectations and cognitive control mechanisms
Borges, M. T., Fernandes, E. G. and Coco, M. 2019. Age-related differences during visual search: the role of contextual expectations and cognitive control mechanisms. Aging, Neuropsychology and Cognition. 27 (4), pp. 489-516.
Extra-foveal Processing of Object Semantics Guides Early Overt Attention During Visual Search
Cimminella, F., Coco, M. and Della Sala, S. 2019. Extra-foveal Processing of Object Semantics Guides Early Overt Attention During Visual Search. Attention, Perception, & Psychophysics. 82, pp. 655-670.
Fixation-related Brain Potentials during Semantic Integration of Object–Scene Information
Coco, M., Nuthmann, A. and Dimigen, O. 2019. Fixation-related Brain Potentials during Semantic Integration of Object–Scene Information. Journal of Cognitive Neuroscience. 32 (4), pp. 571-589.