Showing all 2 results
Peer reviewed
Sami Baral; Li Lucy; Ryan Knight; Alice Ng; Luca Soldaini; Neil T. Heffernan; Kyle Lo – Grantee Submission, 2024
In real-world settings, vision language models (VLMs) should robustly handle naturalistic, noisy visual content as well as domain-specific language and concepts. For example, K-12 educators using digital learning platforms may need to examine and provide feedback across many images of students' math work. To assess the potential of VLMs to support…
Descriptors: Visual Learning, Visual Perception, Natural Language Processing, Freehand Drawing
Peer reviewed
Konishi, Haruka; Brezack, Natalie; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy – Grantee Submission, 2019
Infants appear to progress from universal to language-specific event perception. In Japanese, two different verbs describe a person crossing a "bounded ground" (e.g., a street) versus an "unbounded ground" (e.g., a field), while in English the same verb -- "crossing" -- describes both events. Interestingly, Japanese "and"…
Descriptors: Infants, Cognitive Processes, Verbs, Japanese