Showing all 10 results
Peer reviewed
Walter M. Stroup; Anthony Petrosino; Corey Brady; Karen Duseau – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
Tests of statistical significance often play a decisive role in establishing the empirical warrant of evidence-based research in education. The results from pattern-based assessment items, as introduced in this paper, are categorical and multimodal and do not immediately support the use of measures of central tendency as typically related to…
Descriptors: Statistical Significance, Comparative Analysis, Research Methodology, Evaluation Methods
Peer reviewed
Ayanwale, Musa Adekunle; Isaac-Oloniyo, Flourish O.; Abayomi, Funmilayo R. – International Journal of Evaluation and Research in Education, 2020
This study investigated the dimensionality of binary response items using a nonparametric technique within the item response theory measurement framework. The study used a causal-comparative type of nonexperimental design. The sample consisted of 5,076 public senior secondary school (SSS3) examinees aged 14-16 years from 45 schools,…
Descriptors: Test Items, Item Response Theory, Bayesian Statistics, Nonparametric Statistics
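The kind of dimensionality check this entry describes can be illustrated with a generic eigenvalue screen on simulated data (a minimal sketch: the simulated responses and the eigenvalue-ratio heuristic are illustrative assumptions, not the study's data or its nonparametric method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate binary item responses driven by a single latent trait, so
# the data are unidimensional by construction. Illustrative simulated
# data only, not the study's examination responses.
n_persons, n_items = 1000, 10
theta = rng.normal(size=n_persons)
difficulty = np.linspace(-1.5, 1.5, n_items)
prob = 1.0 / (1.0 + np.exp(-2.0 * (theta[:, None] - difficulty[None, :])))
responses = (rng.random((n_persons, n_items)) < prob).astype(int)

# Generic screen: with one dominant dimension, the first eigenvalue of
# the inter-item correlation matrix stands well clear of the second.
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(responses.T)))[::-1]
print(eigvals[0] > 2 * eigvals[1])  # dominant first factor
```

Formal nonparametric dimensionality assessment, as in the study, goes well beyond this screen, but the intuition is the same: unidimensional data concentrate their common variance in one dominant component.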
Peer reviewed
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Peer reviewed
Delafontaine, Jolien; Chen, Changsheng; Park, Jung Yeon; Van den Noortgate, Wim – Large-scale Assessments in Education, 2022
In cognitive diagnosis assessment (CDA), the impact of misspecified item-attribute relations (or "Q-matrix") designed by subject-matter experts has been a great challenge to real-world applications. This study examined parameter estimation of the CDA with the expert-designed Q-matrix and two refined Q-matrices for international…
Descriptors: Q Methodology, Matrices, Cognitive Measurement, Diagnostic Tests
Peer reviewed
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) has many advantages over its predecessor, classical test theory (CTT), such as item parameters that do not change across samples and ability estimates that are free of the particular items administered. To obtain these advantages, however, several assumptions must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
Peer reviewed
Arenson, Ethan A.; Karabatsos, George – Grantee Submission, 2017
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person ability. Such assumptions can be overly restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Descriptors: Bayesian Statistics, Item Response Theory, Nonparametric Statistics, Models
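The parametric assumption this abstract refers to can be made concrete with the standard two-parameter logistic (2PL) item characteristic curve (a generic sketch; the function name and parameter values are illustrative, and the paper's Bayesian nonparametric model relaxes exactly this fixed functional form):

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# The logistic ICC is strictly monotone in theta -- the restrictive
# assumption that nonparametric IRT models avoid.
thetas = np.linspace(-3, 3, 7)
probs = icc_2pl(thetas, a=1.2, b=0.5)
assert np.all(np.diff(probs) > 0)
```

At theta equal to the difficulty b, this curve always passes through probability 0.5; a nonparametric ICC is free to take any monotone (or even non-monotone) shape the data support.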
Peer reviewed
Sözen, Merve; Bolat, Mualla – Journal of Education and Learning, 2016
The purpose of this study is to develop an achievement test covering the basic concepts of sound and its properties in middle school science lessons, and at the same time to reveal the alternative conceptions that students already hold. During the development of the test, studies in the field and…
Descriptors: Achievement Tests, Science Education, Acoustics, Test Construction
Peer reviewed
Meijer, Rob R.; And Others – Applied Measurement in Education, 1996
Several existing group-based statistics to detect improbable item score patterns are discussed, along with the cut scores proposed in the literature to classify an item score pattern as aberrant. A simulation study and an empirical study are used to compare the statistics and their use and to investigate the practical use of cut scores. (SLD)
Descriptors: Achievement Tests, Classification, Cutting Scores, Identification
Meijer, Rob R. – 1994
In person-fit analysis, the object is to investigate whether an item score pattern is improbable given the item score patterns of the other persons in the group or given what is expected on the basis of a test model. In this study, several existing group-based statistics to detect such improbable score patterns were investigated, along with the…
Descriptors: Achievement Tests, Classification, College Students, Cutting Scores
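The group-based person-fit idea behind the two Meijer entries can be illustrated with the simplest such statistic, a count of Guttman errors in an item score pattern (a generic sketch; the specific statistics and cut scores the studies compare are in the papers themselves, and the score patterns below are made up):

```python
import numpy as np

def guttman_errors(scores, difficulties):
    """Count Guttman errors: item pairs where an easier item is answered
    incorrectly (0) while a harder item is answered correctly (1)."""
    s = np.asarray(scores)[np.argsort(difficulties)]  # easiest item first
    errors = 0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] == 0 and s[j] == 1:  # missed easy, passed hard
                errors += 1
    return errors

diffs = [0.1, 0.2, 0.4, 0.7, 0.9]
# Guttman-consistent pattern: all easier items right, harder ones wrong.
conforming = guttman_errors([1, 1, 1, 0, 0], diffs)  # 0 errors
# Improbable pattern: missed the easiest items, passed the hardest.
aberrant = guttman_errors([0, 0, 1, 1, 1], diffs)    # 6 errors
```

A high error count flags a score pattern as improbable given the group's item ordering; classifying a pattern as aberrant then requires a cut score, which is exactly the practical question these two studies investigate.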
Peer reviewed
Samejima, Fumiko – Applied Psychological Measurement, 1994
The Level 11 vocabulary subtest of the Iowa Tests of Basic Skills was analyzed using a two-stage latent trait approach and a data set of 2,356 examinees approximately 11 years of age. It is concluded that the nonparametric approach leads to efficient estimation of the latent trait. (SLD)
Descriptors: Achievement Tests, Distractors (Tests), Elementary Education, Elementary School Students