Showing all 12 results
Peer reviewed
Duta, Mihaela; Plunkett, Kim – Child Development, 2023
We present a neural network model of referent identification in a visual world task. Inputs are visual representations of item pairs unfolding with sequences of phonemes identifying the target item. The model is trained to output the semantic representation of the target and to suppress the distractor. The training set uses a 200-word lexicon…
Descriptors: Networks, Models, Brain, Child Language
Peer reviewed
Wang, Shuo – Journal of Autism and Developmental Disorders, 2019
Prior studies have emphasized the contribution of aberrant amygdala structure and function in social aspects of autism. However, it remains largely unknown whether amygdala dysfunction directly impairs visual attention and exploration as has been observed in people with autism spectrum disorders (ASD). Here, gaze patterns were directly compared…
Descriptors: Autism, Pervasive Developmental Disorders, Brain Hemisphere Functions, Visual Perception
Peer reviewed
Keehn, Brandon; Westerfield, Marissa; Townsend, Jeanne – Journal of Autism and Developmental Disorders, 2019
This study investigates how task-irrelevant auditory information is processed in children with autism spectrum disorder (ASD). Eighteen children with ASD and 19 age- and IQ-matched typically developing (TD) children were presented with semantically-congruent and incongruent picture-sound pairs, and in separate tasks were instructed to attend to…
Descriptors: Autism, Pervasive Developmental Disorders, Children, Visual Stimuli
Peer reviewed
Hollingworth, Andrew – Journal of Experimental Psychology: Human Perception and Performance, 2012
Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…
Descriptors: Memory, Visual Perception, Attention, Eye Movements
Peer reviewed
Spotorno, Sara; Faure, Sylvane – Brain and Cognition, 2011
What accounts for the Right Hemisphere (RH) functional superiority in visual change detection? An original task which combines one-shot and divided visual field paradigms allowed us to direct change information initially to the RH or the Left Hemisphere (LH) by deleting, respectively, an object included in the left or right half of a scene…
Descriptors: Intervals, Semantics, Visual Perception, Brain Hemisphere Functions
Peer reviewed
Telling, Anna L.; Meyer, Antje S.; Humphreys, Glyn W. – Brain and Cognition, 2010
When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see [Belke et al., 2008] and [Moores et al., 2003]). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we…
Descriptors: Semantics, Eye Movements, Young Adults, Patients
Peer reviewed
Simola, Jaana; Holmqvist, Kenneth; Lindgren, Magnus – Brain and Language, 2009
Readers acquire information outside the current eye fixation. Previous research indicates that having only the fixated word available slows reading, but when the next word is visible, reading is almost as fast as when the whole line is seen. Parafoveal-on-foveal effects are interpreted to reflect that the characteristics of a parafoveal word can…
Descriptors: Semantics, Eye Movements, Visual Perception, Language Processing
Peer reviewed
Vachon, Francois; Tremblay, Sebastien; Jones, Dylan M. – Journal of Experimental Psychology: Human Perception and Performance, 2007
When two visual targets, Target 1 (T1) and Target 2 (T2), are presented among a rapid sequence of distractors, processing of T1 produces an attentional blink. Typically, processing of T2 is markedly impaired, except when T1 and T2 are adjacent (Lag 1 sparing). However, if a shift of task set--a change in task requirements from T1 to T2--occurs,…
Descriptors: Semantics, Visual Stimuli, Cognitive Processes, Eye Movements
Peer reviewed
Griffin, Zenzi M.; Oppenheimer, Daniel M. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006
When describing scenes, speakers gaze at objects while preparing their names (Z. M. Griffin & K. Bock, 2000). In this study, the authors investigated whether gazes to referents occurred in the absence of a correspondence between visual features and word meaning. Speakers gazed significantly longer at objects before intentionally labeling them…
Descriptors: Semantics, Attention, Visual Perception, Reaction Time
Peer reviewed
Dux, Paul E.; Harris, Irina M. – Cognition, 2007
Do the viewpoint costs incurred when naming rotated familiar objects arise during initial identification or during consolidation? To answer this question we employed an attentional blink (AB) task where two target objects appeared amongst a rapid stream of distractor objects. Our assumption was that while both targets and distractors undergo…
Descriptors: Semantics, Identification, Eye Movements, Attention
Peer reviewed
Calvo, Manuel G.; Nummenmaa, Lauri – Journal of Experimental Psychology: General, 2007
Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective…
Descriptors: Semantics, Attention, Emotional Response, Affective Measures
Peer reviewed
Yee, Eiling; Sedivy, Julie C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…
Descriptors: Eye Movements, Word Recognition, Semantics, Language Research