Showing all 7 results
Peer reviewed
Supriyati, Yetti; Iriyadi, Deni; Falani, Ilham – Journal of Technology and Science Education, 2021
This study aims to develop a score equating application for computer-based school exams using parallel test kits with 25% anchor items. The items are arranged according to the HOTS (Higher Order Thinking Skills) category and use a scientific approach suited to the characteristics of physics lessons. Therefore, the questions were made using stimulus,…
Descriptors: Physics, Science Instruction, Teaching Methods, Equated Scores
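The Supriyati et al. study links parallel test forms through a common block of anchor items. As background only, and not the authors' application, the sketch below shows the simplest linear equating rule, which matches the two forms' score distributions in mean and standard deviation; anchor-item designs (e.g., Tucker or chained linear equating) refine this when the groups taking the two forms are not equivalent. All data here are invented.

```python
import numpy as np

def linear_equate(scores_x, scores_y, x):
    """Linear equating: map a form-X score x onto form Y's scale so the
    two score distributions match in mean and SD, i.e. y = A*x + B with
    A = sd_Y / sd_X and B = mean_Y - A * mean_X."""
    a = np.std(scores_y, ddof=1) / np.std(scores_x, ddof=1)
    b = np.mean(scores_y) - a * np.mean(scores_x)
    return a * np.asarray(x) + b

# Invented totals for two parallel forms taken by equivalent groups.
rng = np.random.default_rng(0)
scores_x = rng.normal(46, 10, 300)
scores_y = rng.normal(50, 9, 300)
print(linear_equate(scores_x, scores_y, x=55))  # a form-X score of 55 on form Y's scale
```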
Peer reviewed
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
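Attali et al.'s generic approach amounts to one fixed feature set and one fixed weight vector reused across every prompt, so a given score means the same thing everywhere. The sketch below illustrates only that weighted-sum idea; the feature names and weights are invented and are not e-rater's.

```python
# Hypothetical generic scoring model: one weight vector for all prompts.
FEATURE_WEIGHTS = {
    "avg_word_length": 0.8,
    "grammar_errors_per_100_words": -1.2,
    "type_token_ratio": 0.6,
    "log_essay_length": 1.5,
}

def generic_score(features: dict) -> float:
    """Combine feature values with the fixed weights; because neither the
    features nor the weights change by prompt, scores are comparable
    across prompts."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

essay = {
    "avg_word_length": 4.7,
    "grammar_errors_per_100_words": 2.1,
    "type_token_ratio": 0.52,
    "log_essay_length": 5.9,
}
print(round(generic_score(essay), 2))
```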
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
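Polytomous IRT models give each response category of an item its own probability curve. As one concrete instance of the models such a handbook covers, Samejima's graded response model defines the probability of responding in category k or higher with a logistic curve and obtains category probabilities as differences of adjacent curves. The item parameters below are made up.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Samejima's graded response model for one item.

    theta -- examinee ability
    a     -- item discrimination
    b     -- increasing boundary locations (K-1 values for K categories)

    P*(k) = 1 / (1 + exp(-a*(theta - b_k))) is P(response >= category k);
    category probabilities are differences of adjacent P* values.
    """
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    cum = np.concatenate(([1.0], p_star, [0.0]))  # P(X >= 0) = 1, P(X >= K) = 0
    return cum[:-1] - cum[1:]                     # P(X = k) for k = 0..K-1

# Made-up 4-category item: discrimination 1.3, three boundary locations.
probs = grm_category_probs(theta=0.5, a=1.3, b=[-1.0, 0.2, 1.4])
print(probs, probs.sum())  # four category probabilities summing to 1
```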
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
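The "brute-empirical" end of the spectrum lets the data alone choose feature weights, typically by regressing human scores on the features, while a substantively driven approach constrains which features enter and how they are weighted. A hypothetical least-squares illustration of the empirical end follows; it is not e-rater's actual procedure, and the features and data are invented.

```python
import numpy as np

# Invented feature matrix and human scores for 300 essays.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
human_scores = X @ np.array([1.2, -0.4, 0.0, 0.8, 0.3, -0.1]) \
               + rng.normal(0, 0.5, 300)

# Ordinary least squares: weights chosen solely to predict human scores,
# with no substantive constraint on sign or magnitude.
design = np.column_stack([X, np.ones(len(X))])  # add an intercept column
w, *_ = np.linalg.lstsq(design, human_scores, rcond=None)
print("empirical weights:", np.round(w[:-1], 2), "intercept:", round(w[-1], 2))
```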
Ree, Malcolm James; Jensen, Harald E. – 1980
By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
Descriptors: Computer Assisted Testing, Equated Scores, Error of Measurement, Item Analysis
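Ree and Jensen generate their response data under Birnbaum's three-parameter logistic (3PL) model, in which the probability of a correct answer is P(theta) = c + (1 - c) / (1 + exp(-D*a*(theta - b))). A minimal sketch of that kind of simulation follows; the sample sizes and the 20 + 60 item layout mirror the study, but the item parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_3pl(theta, a, b, c, D=1.7):
    """Simulate 0/1 responses under the 3PL model:
    P(correct | theta) = c + (1 - c) / (1 + exp(-D*a*(theta - b)))."""
    z = D * a[None, :] * (theta[:, None] - b[None, :])
    p = c[None, :] + (1.0 - c[None, :]) / (1.0 + np.exp(-z))
    return (rng.random(p.shape) < p).astype(int)

n_items = 80                           # 20 equating items + 60 additional items
a = rng.uniform(0.5, 2.0, n_items)     # invented discriminations
b = rng.normal(0.0, 1.0, n_items)      # invented difficulties
c = rng.uniform(0.1, 0.25, n_items)    # invented lower asymptotes

for n in (250, 500, 1000, 2000):       # the study's sample sizes
    responses = simulate_3pl(rng.normal(0.0, 1.0, n), a, b, c)
    print(n, responses.mean())         # overall proportion correct
```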
Olsen, James B.; And Others – 1986
Student achievement test scores were compared and equated, using three different testing methods: paper-administered, computer-administered, and computerized adaptive testing. The tests were developed from third and sixth grade mathematics item banks of the California Assessment Program. The paper- and computer-administered tests were identical…
Descriptors: Achievement Tests, Adaptive Testing, Comparative Testing, Computer Assisted Testing
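Of the three methods compared, computerized adaptive testing is the algorithmic one: after each response it re-estimates ability and picks the unadministered item that is most informative at the current estimate. A bare-bones sketch of that standard maximum-information selection rule under the 2PL model follows; nothing here reflects the California Assessment Program item banks.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta:
    I(theta) = a^2 * p * (1 - p), with p the correct-response probability."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def next_item(theta_hat, a, b, administered):
    """Maximum-information rule: pick the unadministered item with the
    highest information at the current ability estimate."""
    info = info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf  # mask items already given
    return int(np.argmax(info))

# Invented 2PL item bank; items 4 and 17 have already been administered.
rng = np.random.default_rng(7)
a = rng.uniform(0.6, 2.0, 50)
b = rng.normal(0.0, 1.0, 50)
print(next_item(theta_hat=0.3, a=a, b=b, administered={4, 17}))
```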
Cohen, Allan S., Comp. – 1979
This partially annotated bibliography of journal articles, dissertations, convention papers, research reports, and a few books and unpublished manuscripts provides comprehensive coverage of work on latent trait theory and practice. Documents are arranged alphabetically by author. The period covered ranges from the early 1950s to the present.…
Descriptors: Attitude Measures, Career Development, Computer Assisted Testing, Computer Programs