Showing all 12 results
Peer reviewed
PDF on ERIC
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items function similarly across different groups of individuals. Differential item functioning (DIF) occurs when individuals of equal ability from different groups perform differently on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
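For readers new to this literature, the sketch below illustrates one classical DIF check, the Mantel-Haenszel common odds ratio: examinees are stratified by a matching score, and the odds of a correct response on the studied item are compared across groups within each stratum. It is a generic illustration of the technique, not the method used in the article above, and the function and variable names are hypothetical.

import numpy as np

def mantel_haenszel_odds_ratio(correct, group, matching_score):
    """Mantel-Haenszel DIF check for a single item.
    correct: 0/1 NumPy array of responses to the studied item
    group: 0 = reference group, 1 = focal group
    matching_score: stratifying variable, e.g. total test score
    Returns the common odds ratio; values far from 1 suggest DIF.
    """
    num, den = 0.0, 0.0
    for s in np.unique(matching_score):
        m = matching_score == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")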
Peer reviewed
Direct link
Wolkowitz, Amanda A.; Wright, Keith D. – Journal of Educational Measurement, 2019
This article explores the amount of equating error at a passing score when equating scores from exams with small sample sizes. This article focuses on equating using classical test theory methods of Tucker linear, Levine linear, frequency estimation, and chained equipercentile equating. Both simulation and real data studies were used in the…
Descriptors: Error Patterns, Sample Size, Test Theory, Test Bias
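As background for the entry above: the classical linear methods it names (Tucker and Levine) rest on the same linear transformation that matches standardized scores across forms, differing in how the synthetic-population means and standard deviations are estimated from an anchor test. A minimal single-group sketch with hypothetical names follows; the article's small-sample error study would correspond to resampling the score data and tracking the equated value at the passing score.

import numpy as np

def linear_equate(x, scores_x, scores_y):
    """Map a Form X score x to the Form Y scale so that standardized
    scores agree: (x - mean_X) / sd_X = (y - mean_Y) / sd_Y.
    Tucker and Levine methods use this same transformation but estimate
    the moments for a synthetic population via a common anchor test.
    """
    mu_x, sd_x = np.mean(scores_x), np.std(scores_x, ddof=1)
    mu_y, sd_y = np.mean(scores_y), np.std(scores_y, ddof=1)
    return mu_y + (sd_y / sd_x) * (x - mu_x)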
Peer reviewed
Direct link
von Davier, Matthias – Quality Assurance in Education: An International Perspective, 2018
Purpose: Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent…
Descriptors: Statistical Analysis, Surveys, International Assessment, Error Patterns
Peer reviewed
Cudeck, Robert – Journal of Educational Measurement, 1980
Methods for evaluating the consistency of responses to test items were compared. When a researcher is unwilling to make the assumptions of classical test theory, has only a small number of items, or is in a tailored testing context, Cliff's dominance indices may be useful. (Author/CTM)
Descriptors: Error Patterns, Item Analysis, Test Items, Test Reliability
Kearns, Jack – 1974
Empirical Bayes point estimates of true score may be obtained if the distribution of observed score for a fixed examinee is approximated in one of several ways by a well-known compound binomial model. The Bayes estimates of true score may be expressed in terms of the observed score distribution and the distribution of a hypothetical binomial test.…
Descriptors: Career Development, Error Patterns, Expectation, Mathematical Models
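A simple special case of the empirical Bayes idea in the entry above is Kelley's regressed true-score estimate, paired here with a KR-21 reliability estimate that assumes a binomial error model. This is a hedged sketch of the general approach, not the compound binomial development in the paper; all names are hypothetical.

import numpy as np

def kr21(scores, n_items):
    """KR-21 reliability estimate, which assumes items of roughly equal
    difficulty and a binomial error model for the raw scores."""
    mu = np.mean(scores)
    var = np.var(scores, ddof=1)
    k = n_items
    return (k / (k - 1.0)) * (1.0 - mu * (k - mu) / (k * var))

def kelley_true_score(x, scores, reliability):
    """Kelley's estimate: shrink an observed score x toward the group
    mean in proportion to the test's unreliability. This is the
    empirical Bayes point estimate under a simple normal model."""
    return reliability * x + (1.0 - reliability) * np.mean(scores)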
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – 1982
Several extended caution indices (ECIs) have been introduced earlier as a link between two distinctly different approaches: one based on standard statistics and the other, a model-based approach, utilizing item response theory (IRT). Expected values and variance of some ECIs are derived and their statistical properties are compared and discussed.…
Descriptors: Error Patterns, Higher Education, Latent Trait Theory, Models
Peer reviewed
Hamilton, Lawrence C. – Journal of Educational Measurement, 1981
Errors in self-reports of three academic performance measures are analyzed. Empirical errors are shown to depart radically from both no-error and random-error assumptions. Self-reports by females depart farther from the no-error and random-error models for all three performance measures. (Author/BW)
Descriptors: Academic Achievement, Error Patterns, Grade Point Average, Models
Peer reviewed
Movshovitz-Hadar, Nitsa; And Others – Journal for Research in Mathematics Education, 1987
A content-oriented analysis of written solutions to test items in Israeli high school graduation examinations in mathematics yielded six error categories: misused data; misinterpreted language; logically invalid inference; distorted theorem or definition; unverified solution; and technical error. (Authors/MNS)
Descriptors: Educational Research, Error Patterns, Mathematics Instruction, Research Reports
Jones, Douglas H. – 1985
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses cannot be replicated within a single examinee and the logistic models are only an idealization, practitioners make…
Descriptors: Error Patterns, Functions (Mathematics), Goodness of Fit, Item Analysis
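To make the entry above concrete, the sketch below computes the maximum likelihood estimate of ability under a two-parameter logistic (2PL) model with known item parameters, using Newton-Raphson iteration. It illustrates the kind of likelihood the paper discusses rather than its specific analysis; the names are hypothetical.

import numpy as np

def theta_mle(responses, a, b, iters=25):
    """Newton-Raphson MLE of ability theta under the 2PL model
    P(correct) = 1 / (1 + exp(-a * (theta - b))).
    responses, a, b: NumPy arrays over items; a, b assumed known.
    Note: the MLE diverges if every response is 0 or every response is 1.
    """
    theta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        grad = np.sum(a * (responses - p))     # d log-likelihood / d theta
        hess = -np.sum(a**2 * p * (1.0 - p))   # second derivative, always < 0
        theta -= grad / hess
    return theta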
Peer reviewed
O'Brien, Michael L. – Studies in Educational Evaluation, 1986
A test score can be used for individual instructional diagnosis after determining whether: (1) difficulty of the test items was consistent with the complexity of the content measured; (2) items measuring the same underlying process were about equally difficult; and (3) partial credit scoring would increase the reliability of the diagnosis. (LMO)
Descriptors: Behavioral Objectives, Difficulty Level, Educational Diagnosis, Error Patterns
Webb, Noreen; Herman, Joan – 1984
This paper describes the development of a language arts test to assess the consistency of student response patterns and the feasibility of using the test to diagnose students' misconceptions. The studies were part of a project to develop computerized adaptive testing for the language arts with software to diagnose student errors. The…
Descriptors: Adaptive Testing, Computer Assisted Testing, Diagnostic Tests, Error Patterns
Peer reviewed
van Weeren, J.; Theunissen, T. J. J. M. – Language Learning, 1987
A systematic and explicit approach to the evaluation of pronunciation is proposed. Generalizability theory was applied to capture all relevant factors in one psychomotor model. French and German pronunciation tests (in Appendix) were devised and evaluated. Common pronunciation problems for native Dutch speakers were incorporated. (Author/LMO)
Descriptors: Communicative Competence (Languages), Dutch, Error Analysis (Language), Error Patterns
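Since the entry above rests on generalizability theory, the sketch below estimates a relative generalizability coefficient for the simplest fully crossed one-facet design (persons by raters), using the usual ANOVA variance-component estimates. It is a generic illustration under stated assumptions, not the model from the article; names are hypothetical.

import numpy as np

def g_coefficient(scores, n_raters_decision=None):
    """Relative G coefficient for a persons x raters design.
    scores: 2-D NumPy array, rows = persons, columns = raters.
    n_raters_decision: number of raters assumed in the decision study
    (defaults to the number observed).
    """
    n_p, n_r = scores.shape
    if n_raters_decision is None:
        n_raters_decision = n_r
    grand = scores.mean()
    ss_p = n_r * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_r = n_p * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_pr = np.sum((scores - grand) ** 2) - ss_p - ss_r
    ms_p = ss_p / (n_p - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))
    var_p = max((ms_p - ms_pr) / n_r, 0.0)   # person variance component
    var_pr = ms_pr                           # interaction/error component
    return var_p / (var_p + var_pr / n_raters_decision)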