Showing all 3 results
Peer reviewed
Sami Baral; Li Lucy; Ryan Knight; Alice Ng; Luca Soldaini; Neil T. Heffernan; Kyle Lo – Grantee Submission, 2024
In real-world settings, vision language models (VLMs) should robustly handle naturalistic, noisy visual content as well as domain-specific language and concepts. For example, K-12 educators using digital learning platforms may need to examine and provide feedback across many images of students' math work. To assess the potential of VLMs to support…
Descriptors: Visual Learning, Visual Perception, Natural Language Processing, Freehand Drawing
Peer reviewed
Courbois, Yanick; Coello, Yann; Bouchart, Isabelle – Journal of Intellectual and Developmental Disability, 2004
Four visual imagery tasks were presented to three groups of adolescents with or without spastic diplegic cerebral palsy. The first group was composed of six adolescents with cerebral palsy who had associated visual-perceptual deficits (CP-PD); the second group was composed of five adolescents with cerebral palsy and no associated visual-perceptual…
Descriptors: Imagery, Adolescents, Cerebral Palsy, Visual Stimuli
Peer reviewed
Quinn, Paul C. – Psychological Record, 2005
Vidic and Haaf (2004) questioned the idea that infants use head information to categorize cats as distinct from dogs (Quinn & Eimas, 1996) and argued instead that the torso region is important. However, only null results were observed in the critical test comparisons between modified and unmodified stimuli. In addition, a priori preferences for…
Descriptors: Visual Stimuli, Infants, Classification, Infant Behavior