Publication Date
In 2025 | 0
Since 2024 | 5
Since 2021 (last 5 years) | 13
Since 2016 (last 10 years) | 23
Since 2006 (last 20 years) | 25
Source
Grantee Submission | 25
Author
McNamara, Danielle S. | 5
Madison, Matthew J. | 4
Dascalu, Mihai | 3
Nicula, Bogdan | 3
Albacete, Patricia | 2
Cai, Li | 2
Chounta, Irene-Angelica | 2
Jordan, Pamela | 2
Katz, Sandra | 2
Maas, Lientje | 2
Publication Type
Reports - Research | 23
Speeches/Meeting Papers | 10
Journal Articles | 6
Reports - Evaluative | 2
Education Level
Secondary Education | 5
Elementary Education | 4
High Schools | 4
Grade 7 | 2
Junior High Schools | 2
Middle Schools | 2
Elementary Secondary Education | 1
Grade 6 | 1
Grade 8 | 1
Grade 9 | 1
Higher Education | 1
Location
California | 2
Florida | 1
Tae Yeon Kwon; A. Corinne Huggins-Manley; Jonathan Templin; Mingying Zheng – Grantee Submission, 2023
In classroom assessments, examinees can often answer test items multiple times, resulting in sequential multiple-attempt data. Sequential diagnostic classification models (DCMs) have been developed for such data. As student learning processes may be aligned with a hierarchy of measured traits, this study aimed to develop a sequential hierarchical…
Descriptors: Classification, Accuracy, Student Evaluation, Sequential Approach
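A minimal sketch of the structural idea behind a hierarchical DCM, assuming a simple linear skill hierarchy (the attribute names and hierarchy are illustrative, not taken from the study): mastery of a later attribute presupposes mastery of every earlier one, which shrinks the set of permissible mastery profiles.

    from itertools import product

    # Linear hierarchy A1 -> A2 -> A3: a later attribute can be mastered
    # only if every earlier attribute is also mastered.
    def respects_hierarchy(profile):
        return all(profile[i] >= profile[i + 1] for i in range(len(profile) - 1))

    profiles = [p for p in product((0, 1), repeat=3) if respects_hierarchy(p)]
    print(profiles)  # 4 permissible profiles instead of 2**3 = 8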
Matthew J. Madison; Seungwon Chung; Junok Kim; Laine P. Bradshaw – Grantee Submission, 2023
Recent developments have enabled the modeling of longitudinal assessment data in a diagnostic classification model (DCM) framework. These longitudinal DCMs were developed to provide measures of student growth on a discrete scale in the form of attribute mastery transitions, thereby supporting categorical and criterion-referenced interpretations of…
Descriptors: Models, Cognitive Measurement, Diagnostic Tests, Classification
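A toy sketch of the transition idea (the values and the first-order assumption are illustrative, not estimates from the paper): a longitudinal DCM tracks how examinees move between discrete mastery statuses across time points.

    import numpy as np

    # Hypothetical transition matrix for one attribute: rows index status at
    # time t (0 = non-mastery, 1 = mastery), columns index status at t + 1.
    T = np.array([[0.6, 0.4],
                  [0.1, 0.9]])
    p_t = np.array([0.7, 0.3])   # mastery distribution at time t
    print(p_t @ T)               # implied distribution at t + 1: [0.45 0.55]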
Lientje Maas; Matthew J. Madison; Matthieu J. S. Brinkhuis – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models that yield probabilistic classifications of respondents according to a set of discrete latent variables. The current study examines the recently introduced one-parameter log-linear cognitive diagnosis model (1-PLCDM), which has increased interpretability compared with general DCMs due…
Descriptors: Clinical Diagnosis, Classification, Models, Psychometrics
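For orientation, a sketch of a single-attribute log-linear item response function; holding the main effect constant across items is shown purely to illustrate a one-parameter constraint and is an assumption here, not the paper's exact specification.

    import math

    # Log-odds of a correct answer: item intercept plus a main effect that
    # applies only when the attribute is mastered (alpha = 1).
    def p_correct(intercept, main_effect, alpha):
        return 1.0 / (1.0 + math.exp(-(intercept + main_effect * alpha)))

    print(p_correct(-1.0, 2.0, 0))  # non-master: ~0.27
    print(p_correct(-1.0, 2.0, 1))  # master:     ~0.73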
Madeline A. Schellman; Matthew J. Madison – Grantee Submission, 2024
Diagnostic classification models (DCMs) have grown in popularity as stakeholders increasingly desire actionable information related to students' skill competencies. Longitudinal DCMs offer a psychometric framework for providing estimates of students' proficiency status transitions over time. For both cross-sectional and longitudinal DCMs, it is…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
Matthew J. Madison; Stefanie Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency of specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
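A toy illustration of the classification step such models perform (all probabilities are assumed values, not estimates from the cited work): Bayes' rule turns item responses into a posterior probability of mastery.

    import numpy as np

    p_master = np.array([0.90, 0.85, 0.80])     # P(correct | mastery), per item
    p_nonmaster = np.array([0.30, 0.25, 0.20])  # P(correct | non-mastery)
    x = np.array([1, 1, 0])                     # observed responses

    def likelihood(p):
        return np.prod(p**x * (1 - p)**(1 - x))

    prior = 0.5
    post = likelihood(p_master) * prior / (
        likelihood(p_master) * prior + likelihood(p_nonmaster) * (1 - prior))
    print(post)  # posterior probability of mastery, ~0.72 here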
W. Jake Thompson – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models that can be used to estimate the presence or absence of psychological traits, or proficiency on fine-grained skills. Critical to the use of any psychometric model in practice, including DCMs, is an evaluation of model fit. Traditionally, DCMs have been estimated with maximum…
Descriptors: Bayesian Statistics, Classification, Psychometrics, Goodness of Fit
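One common Bayesian fit tool is the posterior predictive check; the sketch below uses a deliberately simple stand-in model (a Beta posterior for a single correct-response probability), not the DCM machinery the paper evaluates.

    import numpy as np

    rng = np.random.default_rng(0)
    observed = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # toy item responses
    # Stand-in posterior draws for the correct-response probability.
    draws = rng.beta(1 + observed.sum(), 1 + (1 - observed).sum(), size=2000)
    # Replicate data under each draw and compare a discrepancy statistic.
    rep = rng.binomial(n=observed.size, p=draws) / observed.size
    ppp = np.mean(rep >= observed.mean())           # posterior predictive p-value
    print(ppp)  # values near 0 or 1 would signal misfit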
Magooda, Ahmed; Elaraby, Mohamed; Litman, Diane – Grantee Submission, 2021
This paper explores the effect of using multitask learning for abstractive summarization in the context of small training corpora. In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target…
Descriptors: Data Analysis, Synthesis, Documentation, Training
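A schematic of the multitask pattern the abstract describes, assuming a shared encoder with task-specific heads (the architecture, sizes, and task pairing here are illustrative, not the paper's):

    import torch
    import torch.nn as nn

    class MultitaskModel(nn.Module):
        """Shared encoder; one head per task, trained on a weighted loss sum."""
        def __init__(self, vocab_size=10000, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.summary_head = nn.Linear(hidden, vocab_size)  # token logits
            self.paraphrase_head = nn.Linear(hidden, 2)        # binary logits

        def forward(self, tokens):
            states, _ = self.encoder(self.embed(tokens))
            return self.summary_head(states), self.paraphrase_head(states.mean(dim=1))

    # Training would minimize: w1 * summarization_loss + w2 * auxiliary_loss.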
Cai, Zhiqiang; Siebert-Evenstone, Amanda; Eagan, Brendan; Shaffer, David Williamson – Grantee Submission, 2021
When text datasets are very large, manually coding line by line becomes impractical. As a result, researchers sometimes try to use machine learning algorithms to automatically code text data. One of the most popular algorithms is topic modeling. For a given text dataset, a topic model provides probability distributions of words for a set of…
Descriptors: Coding, Artificial Intelligence, Models, Probability
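A minimal topic-model example with scikit-learn's LDA implementation (the toy corpus is invented); each fitted topic is a distribution over words, as the abstract notes.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["students revised their essays after feedback",
            "the tutor gave feedback on each essay draft",
            "the solver checked the algebra steps",
            "students practiced algebra problems online"]
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    words = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        # Normalizing each row of components_ gives P(word | topic).
        print(k, [words[i] for i in topic.argsort()[-3:][::-1]])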
Lijin Zhang; Xueyang Li; Zhiyong Zhang – Grantee Submission, 2023
The thriving developer community has a significant impact on the widespread use of R software. To better understand this community, we conducted a study analyzing all R packages available on CRAN. We identified the most popular topics of R packages by text mining the package descriptions. Additionally, using network centrality measures, we…
Descriptors: Computer Software, Programming Languages, Data Analysis, Visual Aids
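A sketch of the network-centrality step, assuming a hypothetical dependency graph (the real study mined CRAN metadata, which is not reproduced here):

    import networkx as nx

    # Edge A -> B means package A imports package B.
    G = nx.DiGraph([("ggplot2", "rlang"), ("dplyr", "rlang"),
                    ("tidyr", "dplyr"), ("tidyr", "rlang")])
    print(nx.in_degree_centrality(G))    # heavily-imported packages rank high
    print(nx.betweenness_centrality(G))  # packages that bridge the graph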
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one of multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions using linguistic and semantic features within their responses.…
Descriptors: Natural Language Processing, Taxonomy, Responses, Semantics
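A bare-bones version of feature-based answer classification (the labels and features are hypothetical stand-ins for the paper's richer linguistic and semantic features):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    answers = ["he left because the storm was coming",
               "the storm forced everyone to leave early",
               "maybe he wanted a new life in the city",
               "perhaps she dreamed of living somewhere new"]
    labels = ["bridging", "bridging", "elaborative", "elaborative"]
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(answers, labels)
    print(clf.predict(["she moved away because of the flood"]))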
Marilena Panaite; Mihai Dascalu; Amy Johnson; Renu Balyan; Jianmin Dai; Danielle S. McNamara; Stefan Trausan-Matu – Grantee Submission, 2018
Intelligent Tutoring Systems (ITSs) are aimed at promoting acquisition of knowledge and skills by providing relevant and appropriate feedback during students' practice activities. ITSs for literacy instruction commonly assess typed responses using Natural Language Processing (NLP) algorithms. One step in this direction often requires building a…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Algorithms, Decision Making
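One simple building block for scoring typed responses, hedged as a stand-in for the richer NLP pipelines ITSs actually use: TF-IDF cosine similarity between the student's answer and a reference answer.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    reference = ["photosynthesis converts light energy into chemical energy"]
    student = ["plants turn sunlight into stored chemical energy"]
    vec = TfidfVectorizer().fit(reference + student)
    score = cosine_similarity(vec.transform(student), vec.transform(reference))[0, 0]
    print(score)  # feedback could trigger when the match falls below a threshold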
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
The ability to automatically assess the quality of paraphrases can be very useful for facilitating literacy skills and providing timely feedback to learners. Our aim is twofold: a) to automatically evaluate the quality of paraphrases across four dimensions: lexical similarity, syntactic similarity, semantic similarity and paraphrase quality, and…
Descriptors: Phrase Structure, Networks, Semantics, Feedback (Response)
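Crude proxies for two of the four dimensions (the paper's actual features and models are more elaborate): Jaccard word overlap for lexical similarity and TF-IDF cosine for semantic similarity.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def lexical_similarity(a, b):        # Jaccard overlap of word sets
        A, B = set(a.lower().split()), set(b.lower().split())
        return len(A & B) / len(A | B)

    def semantic_similarity(a, b):       # TF-IDF cosine
        m = TfidfVectorizer().fit_transform([a, b])
        return cosine_similarity(m[0], m[1])[0, 0]

    src = "the sudden storm ruined the harvest"
    para = "the unexpected storm destroyed the crops"
    print(lexical_similarity(src, para), semantic_similarity(src, para))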
Jones, Michael N. – Grantee Submission, 2018
Abstraction is a core principle of Distributional Semantic Models (DSMs) that learn semantic representations for words by applying dimensional reduction to statistical redundancies in language. Although the posited learning mechanisms vary widely, virtually all DSMs are prototype models in that they create a single abstract representation of a…
Descriptors: Abstract Reasoning, Semantics, Memory, Learning Processes
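The prototype idea in miniature, using invented co-occurrence counts: dimensional reduction (truncated SVD here) leaves exactly one dense vector per word, which is the single-representation property the abstract describes.

    import numpy as np

    vocab = ["cat", "dog", "pet", "car"]
    C = np.array([[0, 4, 6, 0],          # toy word-by-word co-occurrence counts
                  [4, 0, 5, 0],
                  [6, 5, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    U, S, _ = np.linalg.svd(np.log1p(C))
    vectors = U[:, :2] * S[:2]           # one 2-d prototype vector per word
    print(dict(zip(vocab, (v.round(2) for v in vectors))))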
Bonifay, Wes; Cai, Li – Grantee Submission, 2017
Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. The information-theoretic principle of minimum description length provides a novel method of investigating complexity by…
Descriptors: Item Response Theory, Difficulty Level, Goodness of Fit, Factor Analysis
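Parameter counting, the traditional complexity measure the abstract contrasts with minimum description length, takes only a few lines (the log-likelihoods below are placeholders, not fitted values):

    import math

    def bic(log_lik, n_params, n_obs):
        return -2 * log_lik + n_params * math.log(n_obs)

    n_items, n_obs = 20, 1000
    counts = {"1PL": n_items, "2PL": 2 * n_items, "3PL": 3 * n_items}
    log_liks = {"1PL": -9500.0, "2PL": -9350.0, "3PL": -9330.0}
    for model, k in counts.items():
        print(model, k, round(bic(log_liks[model], k, n_obs), 1))
    # Two models with equal parameter counts can still differ in effective
    # complexity via functional form, which is where MDL refines the picture.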