Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 2
Since 2006 (last 20 years) | 10
Descriptor
Computer Assisted Testing | 24
Scoring Formulas | 24
Test Reliability | 9
Adaptive Testing | 8
Higher Education | 6
Item Banks | 5
Achievement Tests | 4
Latent Trait Theory | 4
Scoring | 4
Test Construction | 4
Test Items | 4
Author
Weiss, David J. | 5
Abedi, Jamal | 1
Anderson, Richard Ivan | 1
Atkinson, George F. | 1
Attali, Yigal | 1
Bejar, Isaac I. | 1
Ben-Simon, Anat | 1
Bennett, Randy Elliott | 1
Bruno, James | 1
Bruno, James E. | 1
Church, Austin T. | 1
Publication Type
Reports - Research | 18
Journal Articles | 14
Reports - Descriptive | 3
Reports - Evaluative | 2
Speeches/Meeting Papers | 2
Guides - Non-Classroom | 1
Opinion Papers | 1
Tests/Questionnaires | 1
Education Level
Higher Education | 2
Elementary Education | 1
Elementary Secondary Education | 1
Grade 8 | 1
High Schools | 1
Postsecondary Education | 1
Audience
Researchers | 1
Location
Czech Republic | 1
Sweden | 1
United Kingdom (Bristol) | 1
Assessments and Surveys
Preliminary Scholastic… | 1
Test of English as a Foreign… | 1
Guskey, Thomas R.; Jung, Lee Ann – Educational Leadership, 2016
Many educators consider grades calculated from statistical algorithms more accurate, objective, and reliable than grades they calculate themselves. But in this research, the authors first asked teachers to use their professional judgment to choose a summary grade for hypothetical students. When the researchers compared the teachers' grade with the…
Descriptors: Grading, Computer Assisted Testing, Interrater Reliability, Grades (Scholastic)
Tarricone, Pina; Newhouse, C. Paul – Australian Educational Researcher, 2016
Traditional moderation of student assessments is often carried out with groups of teachers working face-to-face in a specified location making judgements concerning the quality of representations of achievement. This traditional model has relied little on modern information communications technologies and has been logistically challenging. We…
Descriptors: Visual Arts, Art Education, Art Materials, Alternative Assessment
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. The presented paper focuses on one of these tools: matching questions in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
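The abstract above is cut off before it states the scoring convention studied, so the following is a generic illustration only, not the scheme analyzed in the paper. It sketches two common ways to score a matching question automatically: partial credit per correctly matched pair, or all-or-nothing. The function name and data layout are hypothetical.

```python
def score_matching(response, key, all_or_nothing=False):
    """Score one matching question.

    response, key: dicts mapping each stem to the chosen / correct option.
    Returns a value in [0, 1]: the fraction of stems matched correctly,
    or 1 vs. 0 under the all-or-nothing convention.
    """
    correct = sum(1 for stem, option in key.items() if response.get(stem) == option)
    if all_or_nothing:
        return 1.0 if correct == len(key) else 0.0
    return correct / len(key)

# Example: three stems, two matched correctly.
key = {"term A": 1, "term B": 2, "term C": 3}
response = {"term A": 1, "term B": 3, "term C": 3}
print(score_matching(response, key))        # 0.666...
print(score_matching(response, key, True))  # 0.0
```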
Walker, Philip; Gwynllyw, D. Rhys; Henderson, Karen L. – Teaching Mathematics and Its Applications, 2015
We demonstrate how the re-marker and reporter facility of the DEWIS e-Assessment system facilitates the capture, analysis and reporting of student errors using two case studies: logarithms and indices for first-year computing students at the University of the West of England, and Sturm-Liouville problems for second-year mathematics students at…
Descriptors: Computer Assisted Testing, Error Patterns, Case Studies, College Mathematics
Roscoe, Rod D.; Varner, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
Various computer tools have been developed to support educators' assessment of student writing, including automated essay scoring and automated writing evaluation systems. Research demonstrates that these systems exhibit relatively high scoring accuracy but uncertain instructional efficacy. Students' writing proficiency does not necessarily…
Descriptors: Writing Instruction, Intelligent Tutoring Systems, Computer Assisted Testing, Writing Evaluation
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David M. – ETS Research Report Series, 2008
This report presents the results of a research and development effort for SpeechRater℠ Version 1.0 (v1.0), an automated scoring system for the spontaneous speech of English language learners used operationally in the Test of English as a Foreign Language™ (TOEFL®) Practice Online assessment (TPO). The report includes a summary of the validity…
Descriptors: Speech, Scoring, Scoring Rubrics, Scoring Formulas
Stricker, Lawrence J.; Rock, Donald A. – ETS Research Report Series, 2008
This study assessed the invariance in the factor structure of the "Test of English as a Foreign Language"™ Internet-based test (TOEFL® iBT) across subgroups of test takers who differed in native language and exposure to the English language. The subgroups were defined by (a) Indo-European and Non-Indo-European language family, (b)…
Descriptors: Factor Structure, English (Second Language), Language Tests, Computer Assisted Testing

Koch, William R.; Dodd, Barbara G. – Applied Measurement in Education, 1989
Various aspects of the computerized adaptive testing (CAT) procedure for partial credit scoring were manipulated, focusing on the effects of the manipulations on operational characteristics of the CAT. The effects of item-pool size, item-pool information, and stepsizes used along the trait continuum were assessed. (TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Maximum Likelihood Statistics
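As background for the record above (not a result from the study itself), partial credit scoring in CAT is usually formulated with Masters' partial credit model, which gives the probability that an examinee at trait level θ scores in category x of item i in terms of step difficulties δ_ij:

```latex
P_{ix}(\theta) \;=\;
\frac{\exp\!\Bigl(\sum_{j=0}^{x}(\theta-\delta_{ij})\Bigr)}
     {\sum_{r=0}^{m_i}\exp\!\Bigl(\sum_{r'=0}^{r}(\theta-\delta_{ir'})\Bigr)},
\qquad x = 0,1,\dots,m_i,
\qquad\text{with }\sum_{j=0}^{0}(\theta-\delta_{ij})\equiv 0 .
```

Item selection and trait estimation in the CAT are then driven by the information this model assigns to each item at the current θ estimate, which is why item-pool size, pool information, and step sizes matter.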
Attali, Yigal – ETS Research Report Series, 2007
Because there is no commonly accepted view of what makes for good writing, automated essay scoring (AES) ideally should be able to accommodate different theoretical positions, certainly at the level of state standards but also perhaps among teachers at the classroom level. This paper presents a practical approach and an interactive computer…
Descriptors: Computer Assisted Testing, Automation, Essay Tests, Scoring
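One plausible reading of "accommodating different theoretical positions" is letting users re-weight a fixed set of essay features. The sketch below illustrates that idea only; the feature names, weights, and score scale are hypothetical and are not e-rater's or the paper's actual model.

```python
# Hypothetical per-feature scores for one essay, each already scaled to [0, 1].
features = {"organization": 0.70, "development": 0.50, "word_choice": 0.60, "conventions": 0.90}

def essay_score(features, weights, scale=6):
    """Combine feature scores under user-chosen weights into a score on a 0-to-`scale` scale."""
    total = sum(weights.values())
    weighted = sum(weights[name] * value for name, value in features.items())
    return round(scale * weighted / total, 2)

# The same essay scored under two different "theoretical positions":
# one weighting development heavily, one weighting conventions heavily.
print(essay_score(features, {"organization": 1, "development": 3, "word_choice": 1, "conventions": 1}))  # 3.7
print(essay_score(features, {"organization": 1, "development": 1, "word_choice": 1, "conventions": 3}))  # 4.5
```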
Anderson, Richard Ivan – 1980
Features of a probabilistic testing system that has been implemented on the CERL PLATO computer system are described. The key feature of the system is the manner in which an examinee responds to each test item; the examinee distributes probabilities among the alternatives of each item by positioning a small square on or within an…
Descriptors: Computer Assisted Testing, Data Collection, Feedback, Probability
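The record above describes examinees reporting a probability distribution over an item's alternatives. Proper scoring rules are the standard way to reward such reports; a minimal sketch of one of them, the logarithmic rule, follows as an illustration of the genre, not the formula used in the PLATO system.

```python
import math

def log_score(probs, correct, floor=0.01):
    """Logarithmic (proper) scoring rule: the reward is the natural log of the
    probability the examinee assigned to the correct alternative. `floor`
    avoids a score of -infinity when that probability is zero."""
    return math.log(max(probs.get(correct, 0.0), floor))

# An examinee spreads belief 0.6 / 0.3 / 0.1 over options A, B, C; B is keyed correct.
print(log_score({"A": 0.6, "B": 0.3, "C": 0.1}, "B"))  # about -1.20
```

Because the rule is proper, an examinee maximizes expected score by reporting probabilities that match their actual beliefs rather than gaming the response.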
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award partial marks for partially correct answers and full marks for correct answers, and to check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
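A minimal sketch of the kind of marking scheme the abstract above describes: partial marks keyed to individual options, plus a flag for zero-mark options chosen by a large share of the class (the "widespread misunderstanding" check). The marking key, threshold, and function name are assumptions for illustration, not the authors' program.

```python
from collections import Counter

# Hypothetical marking key: full marks for the best option, partial marks for a defensible one.
MARKS = {"A": 1.0, "B": 0.5, "C": 0.0, "D": 0.0}

def mark_item(responses, marks=MARKS, misconception_threshold=0.4):
    """Return each student's mark plus any zero-mark option chosen by a large
    fraction of the class, which may signal widespread misunderstanding."""
    scores = {student: marks.get(choice, 0.0) for student, choice in responses.items()}
    counts = Counter(responses.values())
    n = len(responses)
    flagged = [opt for opt, c in counts.items()
               if marks.get(opt, 0.0) == 0.0 and c / n >= misconception_threshold]
    return scores, flagged

responses = {"s1": "A", "s2": "C", "s3": "C", "s4": "B", "s5": "C"}
print(mark_item(responses))  # option C drew 60% of the class, so it is flagged
```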
Lindblad, Torsten – 1981
The recent psycholinguistic-sociolinguistic trend in foreign language teaching indicates a shift of interest away from quantitative data towards qualitative information of different kinds. Validity and relevance are stressed, so new test formats are demanded, as well as new methods of dealing with student answers. Item analysis techniques used…
Descriptors: Communicative Competence (Languages), Computer Assisted Testing, Computer Programs, Higher Education
Weiss, David J. – 1973
This report describes the stratified adaptive (stradaptive) test as a strategy for tailoring an ability test to individual differences in testee ability; administration of the test is controlled by a time-shared computer system. The rationale of this method is described as it derives from Binet's strategy of ability test administration and…
Descriptors: Adaptive Testing, Branching, Computer Assisted Testing, Individual Testing
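The stradaptive strategy is usually described as items grouped into difficulty strata, with a correct answer branching the examinee up one stratum and an incorrect answer down one. The sketch below follows that general description; the entry point, termination rule, and item layout are placeholders, not the 1973 implementation.

```python
def stradaptive_test(strata, answer, max_items=10):
    """strata: lists of items, ordered from easiest to hardest stratum.
    answer(item) -> bool. Returns the administered sequence of (stratum, item, correct)."""
    level = len(strata) // 2          # placeholder entry point: start near the middle stratum
    history = []
    for _ in range(max_items):
        if not strata[level]:         # current stratum exhausted; stop (placeholder rule)
            break
        item = strata[level].pop(0)   # next unused item in the current stratum
        correct = answer(item)
        history.append((level, item, correct))
        # branch: up one stratum after a correct response, down one after an incorrect one
        level = min(level + 1, len(strata) - 1) if correct else max(level - 1, 0)
    return history

# Toy run: five strata of three items each; the simulated examinee answers correctly
# up through difficulty level 3 and incorrectly above it.
strata = [[f"d{d}_{k}" for k in range(3)] for d in range(1, 6)]
print(stradaptive_test(strata, lambda item: int(item[1]) <= 3))
```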
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron – Journal of Educational Computing Research, 2006
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
Descriptors: Preservice Teacher Education, Writing Evaluation, Case Studies, Formative Evaluation