Showing 1 to 15 of 17 results
Corlatescu, Dragos-Georgian; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previous information. While reading, a mental representation is constructed in the reader's mind. The mental model comprises the words in the text, the relations between the words, and inferences linking to concepts in prior knowledge. The automated model of…
Descriptors: Reading Comprehension, Memory, Inferences, Syntax
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one or multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions using linguistic and semantic features within their responses.…
Descriptors: Natural Language Processing, Taxonomy, Responses, Semantics
Peer reviewed
Keezhatta, Muhammed Salim – Arab World English Journal, 2019
Natural Language Processing (NLP) platforms have recently reported a higher adoption rate of Artificial Intelligence (AI) applications. The purpose of this research is to examine the relationship between NLP and AI in the application of linguistic tasks related to morphology, parsing, and semantics. To achieve this objective, a theoretical…
Descriptors: Models, Correlation, Natural Language Processing, Artificial Intelligence
Sharp, Rebecca Reynolds – ProQuest LLC, 2017
We address the challenging task of "computational natural language inference," by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects the question…
Descriptors: Computation, Natural Language Processing, Inferences, Questioning Techniques
Peer reviewed
Ouyang, Long; Boroditsky, Lera; Frank, Michael C. – Cognitive Science, 2017
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of…
Descriptors: Semiotics, Computational Linguistics, Syntax, Semantics
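The co-occurrence idea in the Ouyang, Boroditsky, and Frank abstract above can be illustrated with a short sketch: count each word's context words in a toy corpus and compare the resulting count vectors with cosine similarity. This is only an illustration of distributional similarity in general, not the authors' model; the corpus, window size, and word choices are invented for the example.

```python
# Toy illustration of distributional similarity: words that occur in
# similar contexts end up with similar co-occurrence vectors.
# (Illustrative sketch only; not the model from the article above.)
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the postman delivered the mail today",
    "the mailman delivered the mail yesterday",
    "the postman carried a heavy mail bag",
    "the mailman carried a heavy mail bag",
    "the dog chased the postman down the street",
    "the dog chased the mailman down the street",
]

window = 2  # neighbors on each side that count as "context"
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[word][tokens[j]] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(cooc["postman"], cooc["mailman"]))  # high: shared contexts
print(cosine(cooc["postman"], cooc["dog"]))      # lower: different contexts
```

The point of the sketch is simply that "postman" and "mailman" receive nearly identical context counts, so no semantic annotation is needed for the similarity to emerge.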
Peer reviewed
Bejar, Isaac I.; Deane, Paul D.; Flor, Michael; Chen, Jing – ETS Research Report Series, 2017
The report is the first systematic evaluation of the sentence equivalence item type introduced by the "GRE"® revised General Test. We adopt a validity framework to guide our investigation based on Kane's approach to validation whereby a hierarchy of inferences that should be documented to support score meaning and interpretation is…
Descriptors: College Entrance Examinations, Graduate Study, Generalization, Inferences
Tu, Yuancheng – ProQuest LLC, 2012
The fundamental problem faced by automatic text understanding in Natural Language Processing (NLP) is to identify semantically related pieces of text and integrate them together to compute the meaning of the whole text. However, the principle of compositionality runs into trouble very quickly when real language is examined with its frequent…
Descriptors: English, Verbs, Computational Linguistics, Natural Language Processing
Peer reviewed
McNorgan, Chris; Reid, Jackie; McRae, Ken – Cognition, 2011
Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within-…
Descriptors: Semantics, Inferences, Experiments, Models
Peer reviewed
Nelson, Robert – Modern Language Journal, 2012
A number of asymmetries in lexical memory emerge when monolinguals and early bilinguals are compared to (relatively) late second language (L2) learners. Their study promises to provide insight into the internal processes that both support and ultimately limit L2 learner achievement. Generally, theory building in L2 and bilingual lexical memory has…
Descriptors: Memory, Brain Hemisphere Functions, Bilingualism, Second Language Learning
Peer reviewed
Huang, Yi Ting; Snedeker, Jesse – Developmental Psychology, 2009
Recent research on children's inferencing has found that although adults typically adopt the pragmatic interpretation of "some" (implying "not all"), 5- to 9-year-olds often prefer the semantic interpretation of the quantifier (meaning possibly "all"). Do these failures reflect a breakdown of pragmatic competence or the metalinguistic demands of…
Descriptors: Young Children, Inferences, Eye Movements, Models
Boyd-Graber, Jordan – ProQuest LLC, 2010
Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…
Descriptors: Language Patterns, Semantics, Linguistics, Multilingualism
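As a point of reference for the Boyd-Graber entry above, the following is a minimal sketch of fitting a basic LDA topic model with scikit-learn on an invented four-document corpus. It shows only the standard exchangeable-documents setting that the abstract takes as its starting point, not the extensions the dissertation develops; the documents, topic count, and library choice are assumptions made for illustration.

```python
# Minimal LDA sketch using scikit-learn (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # toy corpus, invented for illustration
    "the court ruled on the appeal and the judge issued an opinion",
    "the judge and the court reviewed the legal opinion",
    "the team scored late and won the final game",
    "the game ended after the team scored in the final minute",
]

# Bag-of-words counts are the grouped observations LDA works with.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)   # per-document topic proportions

# Show the top words for each inferred topic.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print(doc_topic.round(2))
```

With two topics on this tiny corpus, the legal and sports vocabularies separate cleanly; real applications use far larger corpora and tune the number of topics.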
Peer reviewed
Xu, Fei; Tenenbaum, Joshua B. – Developmental Science, 2007
We report a new study testing our proposal that word learning may be best explained as an approximate form of Bayesian inference (Xu & Tenenbaum, in press). Children are capable of learning word meanings across a wide range of communicative contexts. In different contexts, learners may encounter different sampling processes generating the examples…
Descriptors: Semantics, Bayesian Statistics, Sampling, Inferences
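A toy sketch of the kind of Bayesian word-learning computation discussed in the Xu and Tenenbaum abstract above: nested candidate meanings for a novel word are scored with a size-principle (strong-sampling) likelihood, so the smallest hypothesis consistent with the examples gains probability as more examples arrive. The hypothesis names, set sizes, and uniform prior are invented for illustration and are not taken from the study.

```python
# Toy Bayesian word-learning sketch: nested hypotheses for what a novel
# word might mean, updated with a size-principle likelihood. Numbers
# below are invented for illustration.
hypotheses = {            # hypothesis -> number of objects it covers
    "dalmatian": 5,
    "dog": 50,
    "animal": 500,
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def posterior(n_examples: int) -> dict:
    """P(h | n examples, each consistent with every hypothesis)."""
    # Under strong sampling, each example has likelihood 1/size(h).
    unnorm = {h: prior[h] * (1 / size) ** n_examples
              for h, size in hypotheses.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

for n in (1, 3):
    print(n, {h: round(p, 3) for h, p in posterior(n).items()})
# With more consistent examples, probability concentrates on the
# smallest consistent hypothesis ("dalmatian").
```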
Peer reviewed
McClelland, James L.; Thompson, Richard M. – Developmental Science, 2007
A connectionist model of causal attribution is presented, emphasizing the use of domain-general principles of processing and learning previously employed in models of semantic cognition. The model categorizes objects dependent upon their observed 'causal properties' and is capable of making several types of inferences that 4-year-old children have…
Descriptors: Semantics, Probability, Inferences, Models
Peer reviewed
Wolff, Phillip; Song, Grace – Cognitive Psychology, 2003
This research examines the relationship between the concept of CAUSE as it is characterized in psychological models of causation and the meaning of causal verbs, such as the verb "cause" itself. According to focal set models of causation (Cheng, 1997; Cheng & Novick, 1991, 1992), the concept of CAUSE should be more…
Descriptors: Semantics, Verbs, Prediction, Experiments
Read, Walter; And Others – 1988
A discussion of the application of artificial intelligence to natural language processing looks at several problems in language comprehension, involving semantic ambiguity, anaphoric reference, and metonymy. Examples of these problems are cited, and the importance of the computational approach in analyzing them is explained. The approach applies…
Descriptors: Ambiguity, Artificial Intelligence, Comprehension, Epistemology