Showing 1 to 15 of 23 results
Peer reviewed
Alqarni, Abdulelah Mohammed – Journal on Educational Psychology, 2019
This study compares the psychometric properties of reliability in Classical Test Theory (CTT), item information in Item Response Theory (IRT), and validation from the perspective of modern validity theory, in order to draw attention to potential issues that might arise when testing organizations use both test theories in the same testing…
Descriptors: Test Theory, Item Response Theory, Test Construction, Scoring
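The contrast this abstract draws between CTT reliability and IRT item information can be made concrete with a small sketch. The Python snippet below (with made-up responses and item parameters, not data from the study) computes Cronbach's alpha as a CTT reliability estimate and the Fisher information of a hypothetical 2PL item at a chosen ability level.

```python
import numpy as np

def cronbach_alpha(scores):
    """CTT reliability: Cronbach's alpha for an examinee-by-item score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_information_2pl(theta, a, b):
    """IRT item information for a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

# Hypothetical 0/1 responses: 5 examinees x 4 items (illustration only).
responses = np.array([[1, 1, 0, 1],
                      [1, 0, 0, 0],
                      [1, 1, 1, 1],
                      [0, 0, 0, 1],
                      [1, 1, 1, 0]])
print(cronbach_alpha(responses))                        # single test-level reliability
print(item_information_2pl(theta=0.0, a=1.2, b=-0.5))   # information at one ability level
```

The sketch also illustrates the difference the abstract is pointing at: alpha is one test-level figure, while item information varies with the ability level at which the item is used.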
Peer reviewed
Yoshioka, Sérgio R. I.; Ishitani, Lucila – Informatics in Education, 2018
Computerized Adaptive Testing (CAT) is now widely used. However, inserting new items into the question bank of a CAT requires considerable effort, which makes wide application of CAT in classroom teaching impractical. One solution would be to use the tacit knowledge of teachers or experts for a pre-classification and calibrate during the…
Descriptors: Student Motivation, Adaptive Testing, Computer Assisted Testing, Item Response Theory
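For readers unfamiliar with how a CAT uses calibrated item parameters, the Python sketch below shows the core loop: pick the unadministered item with maximum Fisher information at the current ability estimate, then update the estimate from the responses so far. The item bank, the grid-based maximum-likelihood update, and the simulated examinee are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    p = p_2pl(theta_hat, a, b)
    info = a**2 * p * (1 - p)
    info[list(administered)] = -np.inf          # exclude items already given
    return int(np.argmax(info))

def update_theta(responses, items, a, b, grid=np.linspace(-4, 4, 161)):
    """Crude maximum-likelihood ability update over a theta grid."""
    loglik = np.zeros_like(grid)
    for u, j in zip(responses, items):
        p = p_2pl(grid, a[j], b[j])
        loglik += u * np.log(p) + (1 - u) * np.log(1 - p)
    return grid[np.argmax(loglik)]

# Hypothetical bank of 6 calibrated items (illustration only).
a = np.array([1.0, 1.5, 0.8, 1.2, 2.0, 0.9])
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5, 1.5])
theta_hat, items, responses = 0.0, [], []
for _ in range(3):                               # administer three items
    j = next_item(theta_hat, a, b, items)
    u = int(np.random.rand() < p_2pl(0.3, a[j], b[j]))  # simulate a true theta of 0.3
    items.append(j); responses.append(u)
    theta_hat = update_theta(responses, items, a, b)
print(theta_hat)
```

The practical obstacle the abstract raises sits in the item bank above: every item needs calibrated a and b parameters before this loop can run, which is exactly the effort the authors propose to reduce.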
Peer reviewed
Longabach, Tanya; Peyton, Vicki – Language Testing, 2018
K-12 English language proficiency tests that assess multiple content domains (e.g., listening, speaking, reading, writing) often have subsections based on these content domains; scores assigned to these subsections are commonly known as subscores. Testing programs face increasing customer demands for the reporting of subscores in addition to the…
Descriptors: Comparative Analysis, Test Reliability, Second Language Learning, Language Proficiency
Peer reviewed
Retnawati, Heri – Turkish Online Journal of Educational Technology - TOJET, 2015
This study aimed to compare the accuracy of Test of English Proficiency (TOEP) scores based on a paper-and-pencil test (PPT) versus a computer-based test (CBT). Using the participants' PPT responses documented from 2008-2010 and the CBT TOEP data documented in 2013-2014 on sets 1A, 2A, and 3A for the Listening and…
Descriptors: Scores, Accuracy, Computer Assisted Testing, English (Second Language)
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Peer reviewed
Brennan, Robert L. – Applied Measurement in Education, 2011
Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistencies in observed scores that arise, or could arise, over…
Descriptors: Generalizability Theory, Test Theory, Test Reliability, Item Response Theory
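The sources of inconsistency Brennan refers to can be written out for the simplest case, a crossed persons-by-items (p × i) design. The decomposition and the relative generalizability coefficient below are standard G-theory notation, not formulas quoted from the article:

$$
\sigma^2(X_{pi}) = \sigma^2_p + \sigma^2_i + \sigma^2_{pi,e},
\qquad
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi,e}/n_i},
$$

where \(\sigma^2_p\) is universe-score (person) variance, \(\sigma^2_i\) is item variance, \(\sigma^2_{pi,e}\) is the confounded interaction-plus-error component, and \(n_i\) is the number of items over which scores are averaged.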
Peer reviewed
Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie – Measurement and Evaluation in Counseling and Development, 2013
Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.…
Descriptors: Item Response Theory, Test Theory, Measures (Individuals), Racial Identification
Herman, Geoffrey Lindsay – ProQuest LLC, 2011
Instructors in electrical and computer engineering and in computer science have developed innovative methods to teach digital logic circuits. These methods attempt to increase student learning, satisfaction, and retention. Although there are readily accessible and accepted means for measuring satisfaction and retention, there are no widely…
Descriptors: Grounded Theory, Delphi Technique, Concept Formation, Misconceptions
Eason, Sandra H. – 1989
Generalizability theory provides a technique for accurately estimating the reliability of measurements. The power of this theory is based on the simultaneous analysis of multiple sources of error variances. Equally important, generalizability theory considers relationships among the sources of measurement error. Just as multivariate inferential…
Descriptors: Comparative Analysis, Generalizability Theory, Test Reliability, Test Theory
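As a sketch of what the "simultaneous analysis of multiple sources of error variances" looks like in practice, the Python snippet below estimates the variance components of a fully crossed persons-by-items design from the usual two-way ANOVA mean squares and forms the relative G coefficient. The data and the design are hypothetical, not Eason's.

```python
import numpy as np

def variance_components_pxi(X):
    """G-theory variance components for a crossed persons-by-items design.

    X is an n_p x n_i score matrix. Components follow the expected mean-square
    equations for a two-way random-effects model with one observation per cell.
    (Negative estimates are conventionally truncated to zero; omitted here.)
    """
    X = np.asarray(X, dtype=float)
    n_p, n_i = X.shape
    grand = X.mean()
    person_means = X.mean(axis=1)
    item_means = X.mean(axis=0)

    ss_p = n_i * ((person_means - grand) ** 2).sum()
    ss_i = n_p * ((item_means - grand) ** 2).sum()
    ss_res = ((X - person_means[:, None] - item_means[None, :] + grand) ** 2).sum()

    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    var_pi_e = ms_res                     # interaction-plus-error component
    var_p = (ms_p - ms_res) / n_i         # person (universe-score) component
    var_i = (ms_i - ms_res) / n_p         # item component
    g_coefficient = var_p / (var_p + var_pi_e / n_i)   # relative G coefficient
    return var_p, var_i, var_pi_e, g_coefficient

# Hypothetical 4 persons x 3 items (illustration only).
X = [[7, 6, 5], [9, 8, 9], [4, 5, 3], [6, 7, 6]]
print(variance_components_pxi(X))
```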
Downing, Steven M.; Mehrens, William A. – 1978
Four criterion-referenced reliability coefficients were compared to the Kuder-Richardson estimates and to each other. The Kuder-Richardson formulas 20 and 21, the Livingston, the Subkoviak, and two Huynh coefficients were computed for a random sample of 33 criterion-referenced tests. The Subkoviak coefficient yielded the highest mean value;…
Descriptors: Career Development, Comparative Analysis, Criterion Referenced Tests, Factor Analysis
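For reference, the two Kuder-Richardson estimates named in the abstract take their standard forms (k items, item difficulty \(p_j\) for item j, mean total score \(\bar{X}\), total-score variance \(\sigma_X^2\)); the criterion-referenced coefficients (Livingston, Subkoviak, Huynh) are more involved and are not reproduced here:

$$
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j(1-p_j)}{\sigma_X^2}\right),
\qquad
\mathrm{KR\text{-}21} = \frac{k}{k-1}\left(1 - \frac{\bar{X}\,(k-\bar{X})}{k\,\sigma_X^2}\right).
$$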
Peer reviewed
Williams, Richard H.; Zimmerman, Donald W. – Journal of Experimental Education, 1982
The reliability of simple difference scores is greater than, less than, or equal to that of residualized difference scores, depending on whether the correlation between pretest and posttest scores is greater than, less than, or equal to the ratio of the standard deviations of pretest and posttest scores. (Author)
Descriptors: Achievement Gains, Comparative Analysis, Correlation, Pretests Posttests
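Stated symbolically (a restatement of the abstract's condition, not notation taken from the article), with pretest X, posttest Y, pre-post correlation \(\rho_{XY}\), and standard deviations \(\sigma_X\) and \(\sigma_Y\):

$$
\rho(\text{simple difference}) \;\gtrless\; \rho(\text{residualized difference})
\quad\text{according as}\quad
\rho_{XY} \;\gtrless\; \frac{\sigma_X}{\sigma_Y}.
$$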
Peer reviewed
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine comparability of an online version to the original paper-pencil version of Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
Lovett, Hubert T. – 1975
The reliability of a criterion referenced test was defined as a measure of the degree to which the test discriminates between an individual's level of performance and a predetermined criterion level. The variances of observed and true scores were defined as the squared deviation of the score from the criterion. Based on these definitions and the…
Descriptors: Career Development, Comparative Analysis, Criterion Referenced Tests, Mathematical Models
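One way to render the definitions in this abstract symbolically (an interpretation, not Lovett's own notation), with criterion level C, observed score X, and true score T: the observed- and true-score variances are taken about the criterion rather than about the mean,

$$
\sigma^2_{X\mid C} = E\big[(X - C)^2\big], \qquad \sigma^2_{T\mid C} = E\big[(T - C)^2\big],
$$

and a reliability coefficient analogous to the CTT ratio of true- to observed-score variance, \(\sigma^2_{T\mid C}/\sigma^2_{X\mid C}\), would follow from these definitions; the ratio form is an assumption, since the abstract is truncated before the coefficient itself is stated.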
Goulden, Nancy Rost – 1989
Since speech communication evaluators are beginning to adapt the analytic and holistic instruments and methods used for rating written products to oral products and performance, this research review investigated: (1) what the labels "analytic" and "holistic" mean; (2) the theoretical bases of the two scoring approaches; and (3)…
Descriptors: Comparative Analysis, Higher Education, Holistic Evaluation, Rating Scales
Sammon, Susan F. – 1988
A study investigated whether a positive correlation existed between scores obtained by incoming freshmen on the recently developed Degrees of Reading Power Test (DRP) and the required Reading Comprehension subtest of the New Jersey College Basic Skills Placement Test (NJCBSPT). The subjects, 217 William Paterson College freshmen enrolled in a…
Descriptors: Comparative Analysis, Comparative Testing, Correlation, Educational Testing