Publication Date
| Date Range | Count |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 15 |
Descriptor
| Descriptor | Count |
| Cognitive Ability | 18 |
| Models | 18 |
| Test Items | 18 |
| Item Response Theory | 8 |
| Psychometrics | 5 |
| Difficulty Level | 4 |
| Foreign Countries | 4 |
| Inferences | 4 |
| Probability | 4 |
| Problem Solving | 4 |
| Classification | 3 |
Author
| Author | Count |
| Gierl, Mark J. | 2 |
| Wang, Changjiang | 2 |
| Xu, Xueli | 2 |
| von Davier, Matthias | 2 |
| Asire, Semih | 1 |
| Baron, Simon | 1 |
| Bernard, David | 1 |
| Bingxue Zhang | 1 |
| Chengliang Chai | 1 |
| Cohen, Allan S. | 1 |
| Cor, M. Ken | 1 |
Publication Type
| Publication Type | Count |
| Journal Articles | 16 |
| Reports - Research | 15 |
| Reports - Evaluative | 2 |
| Opinion Papers | 1 |
| Reports - Descriptive | 1 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Count |
| Elementary Education | 2 |
| Elementary Secondary Education | 2 |
| Grade 10 | 1 |
| Grade 4 | 1 |
| Grade 8 | 1 |
| High Schools | 1 |
| Intermediate Grades | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Audience
| Audience | Count |
| Policymakers | 1 |
| Teachers | 1 |
Assessments and Surveys
| Assessment | Count |
| SAT (College Admission Test) | 3 |
| National Assessment of… | 2 |
| Florida Comprehensive… | 1 |
| Peabody Picture Vocabulary… | 1 |
| Preliminary Scholastic… | 1 |
Bingxue Zhang; Yang Shi; Yuxing Li; Chengliang Chai; Longfeng Hou – Interactive Learning Environments, 2023
An adaptive learning environment provides learning support suited to students' individual characteristics, and its student model is the key element in promoting individualized learning. This paper provides a systematic overview of existing student models, showing that the Elo rating system…
Descriptors: Electronic Learning, Models, Students, Individualized Instruction
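The Elo rating system mentioned in the abstract above can be sketched in a few lines. This is a minimal illustrative version for student modeling, not the specific formulation the paper reviews; the learning rate `k` and the symmetric update are assumptions.

```python
# Minimal sketch of an Elo-style update for student modeling (illustrative;
# parameter k and the symmetric update are assumptions, not the paper's spec).
# After each response, the student's ability estimate and the item's difficulty
# estimate move in opposite directions, scaled by how surprising the outcome was.
import math

def elo_update(theta, beta, correct, k=0.4):
    """theta: student ability, beta: item difficulty, correct: 0 or 1."""
    p = 1.0 / (1.0 + math.exp(-(theta - beta)))  # expected P(correct)
    theta_new = theta + k * (correct - p)        # ability rises on a surprise success
    beta_new = beta - k * (correct - p)          # difficulty moves the other way
    return theta_new, beta_new

theta, beta = 0.0, 0.0
theta, beta = elo_update(theta, beta, correct=1)
print(round(theta, 3), round(beta, 3))
```

With both ratings at 0, the expected probability is 0.5, so a correct response nudges ability up and difficulty down by the same amount.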
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Sünbül, Seçil Ömür; Asire, Semih – International Journal of Progressive Education, 2018
This study aimed to evaluate the effects of factors such as sample size, the percentage of misfit items in the test, and item quality (item discrimination) on item and model fit when the Q matrix is misspecified. Data were generated in accordance with the DINA model. The Q matrix was specified for 4 attributes and 15 items. While data…
Descriptors: Clinical Diagnosis, Cognitive Ability, Problem Solving, Models
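The data-generation setup described in this abstract (DINA model, 4 attributes, 15 items) can be sketched as follows. The Q matrix entries and the slip/guess values here are illustrative assumptions, not the study's actual parameters.

```python
# Sketch of response generation under the DINA model, as in simulation studies
# like this one. Q matrix entries and slip/guess values are illustrative
# assumptions. Under DINA, an examinee answers item j correctly with
# probability 1 - s_j if they master every attribute the Q matrix requires
# for item j, and with probability g_j otherwise.
import random

random.seed(0)
n_attributes, n_items = 4, 15

# Q matrix: Q[j][k] = 1 if item j requires attribute k (random for illustration).
Q = [[random.randint(0, 1) for _ in range(n_attributes)] for _ in range(n_items)]
slip = [0.1] * n_items   # s_j: P(incorrect | all required attributes mastered)
guess = [0.2] * n_items  # g_j: P(correct | some required attribute not mastered)

def dina_response(alpha, j):
    """alpha: examinee's attribute mastery vector of 0/1 entries."""
    mastered_all = all(alpha[k] >= Q[j][k] for k in range(n_attributes))
    p = 1 - slip[j] if mastered_all else guess[j]
    return 1 if random.random() < p else 0

alpha = [1, 1, 0, 1]  # examinee masters attributes 1, 2, and 4
responses = [dina_response(alpha, j) for j in range(n_items)]
print(responses)
```

A misspecified Q matrix, the condition the study examines, would mean fitting the model with a Q different from the one used to generate the data.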
Storme, Martin; Myszkowski, Nils; Baron, Simon; Bernard, David – Journal of Intelligence, 2019
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in…
Descriptors: Intelligence Tests, Item Response Theory, Comparative Analysis, Test Reliability
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver – Large-scale Assessments in Education, 2017
Background: A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals in order to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to…
Descriptors: Student Evaluation, International Assessment, Adults, Competence
Wang, Changjiang; Gierl, Mark J. – Journal of Educational Measurement, 2011
The purpose of this study is to apply the attribute hierarchy method (AHM) to a subset of SAT critical reading items and illustrate how the method can be used to promote cognitive diagnostic inferences. The AHM is a psychometric procedure for classifying examinees' test item responses into a set of attribute mastery patterns associated with…
Descriptors: Reading Comprehension, Test Items, Critical Reading, Protocol Analysis
Webb, Mi-young Lee; Cohen, Allan S.; Schwanenflugel, Paula J. – Educational and Psychological Measurement, 2008
This study investigated the use of latent class analysis for the detection of differences in item functioning on the Peabody Picture Vocabulary Test-Third Edition (PPVT-III). A two-class solution for a latent class model appeared to be defined in part by ability because Class 1 was lower in ability than Class 2 on both the PPVT-III and the…
Descriptors: Item Response Theory, Test Items, Test Format, Cognitive Ability
Oosterhof, Albert; Rohani, Faranak; Sanfilippo, Carol; Stillwell, Peggy; Hawkins, Karen – Online Submission, 2008
In assessment, the ability to construct test items that measure a targeted skill is fundamental to validity and alignment. The ability to do the reverse is also important: determining what skill an existing test item measures. This paper presents a model for classifying test items that builds on procedures developed by others, including Bloom…
Descriptors: Test Items, Classification, Models, Cognitive Ability
Lee, Yong-Won; Sawaki, Yasuyo – Language Assessment Quarterly, 2009
The present study investigated the functioning of three psychometric models for cognitive diagnosis--the general diagnostic model, the fusion model, and latent class analysis--when applied to large-scale English as a second language listening and reading comprehension assessments. Data used in this study were scored item responses and incidence…
Descriptors: Reading Comprehension, Field Tests, Identification, Classification
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
Leighton, Jacqueline P.; Cui, Ying; Cor, M. Ken – Applied Measurement in Education, 2009
The objective of the present investigation was to compare the adequacy of two cognitive models for predicting examinee performance on a sample of algebra I and II items from the March 2005 administration of the SAT™. The two models included one generated from verbal reports provided by 21 examinees as they solved the SAT™ items, and the…
Descriptors: Test Items, Inferences, Cognitive Ability, Prediction
Xu, Xueli; von Davier, Matthias – ETS Research Report Series, 2008
Xu and von Davier (2006) demonstrated the feasibility of using the general diagnostic model (GDM) to analyze National Assessment of Educational Progress (NAEP) proficiency data. Their work showed that the GDM analysis not only led to conclusions for gender and race groups similar to those published in the NAEP Report Card, but also allowed…
Descriptors: National Competency Tests, Models, Data Analysis, Reading Tests
Gierl, Mark J.; Wang, Changjiang; Zhou, Jiawen – Journal of Technology, Learning, and Assessment, 2008
The purpose of this study is to apply the attribute hierarchy method (AHM) to a sample of SAT algebra items administered in March 2005. The AHM is a psychometric method for classifying examinees' test item responses into a set of structured attribute patterns associated with different components from a cognitive model of task performance. An…
Descriptors: Test Items, Protocol Analysis, Psychometrics, Algebra
Xu, Xueli; von Davier, Matthias – ETS Research Report Series, 2008
Three strategies for linking two consecutive assessments are investigated and compared by analyzing reading data for the National Assessment of Educational Progress (NAEP) using the general diagnostic model. These strategies are compared in terms of marginal and joint expectations of skills, joint probabilities of skill patterns, and item…
Descriptors: National Competency Tests, Probability, Reading Achievement, Test Items
Samejima, Fumiko – 1999
The logistic positive exponent family (LPEF) of models has been proposed by F. Samejima (1998) for dichotomous responses. This family of models is characterized by point-asymmetric item characteristic curves (ICCs). This paper introduces the LPEF family, and discusses its usefulness in educational measurement and the implications of its use.…
Descriptors: Cognitive Ability, Educational Testing, Equations (Mathematics), Measurement Techniques
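The point-asymmetric item characteristic curves of the LPEF described above come from raising a logistic curve to a positive exponent. This is a minimal sketch of that idea; the parameter values are illustrative assumptions, not values from the paper.

```python
# Sketch of an LPEF-style item characteristic curve: a 2PL logistic raised to
# a positive exponent xi, which makes the curve point-asymmetric. Parameter
# values (a, b, xi) are illustrative assumptions.
import math

def lpef_icc(theta, a=1.0, b=0.0, xi=2.0):
    """P(correct | theta) = logistic(a * (theta - b)) ** xi."""
    logistic = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return logistic ** xi

# With xi = 1 the curve reduces to the symmetric logistic; xi != 1 breaks
# the symmetry around theta = b:
print(lpef_icc(0.0, xi=1.0))  # 0.5 at theta = b, the symmetric case
print(lpef_icc(0.0, xi=2.0))  # 0.25, no longer 0.5 at theta = b
```

The exponent shifts the point of steepest ascent away from the midpoint of the curve, which is what distinguishes the family from symmetric logistic models.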