Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 225 |
| Since 2022 (last 5 years) | 1358 |
| Since 2017 (last 10 years) | 2816 |
| Since 2007 (last 20 years) | 4806 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 40 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |
Location
| Location | Records |
| --- | --- |
| Australia | 169 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 94 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 71 |
| United States | 68 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 5 |
Peer reviewed: Harper, R. – Journal of Computer Assisted Learning, 2003
Discusses multiple-choice questions and presents a statistical approach to post-test correction for guessing that can be used in spreadsheets to automate the correction and generate a grade. Topics include the relationship between learning objectives and multiple-choice assessments, and guessing correction by negative marking. (LRW)
Descriptors: Behavioral Objectives, Computer Assisted Testing, Grades (Scholastic), Guessing (Tests)
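The correction Harper describes is, in its standard textbook form, the right-minus-wrong formula-scoring adjustment; a minimal sketch under that assumption (the article may weight the penalty differently):

```python
def corrected_score(num_right: int, num_wrong: int, num_choices: int) -> float:
    """Classical correction for guessing (formula scoring): R - W/(k - 1).

    Omitted items neither add nor subtract. The exact weighting in
    Harper (2003) may differ; this is the textbook form, which is just
    as easy to reproduce in a single spreadsheet cell.
    """
    return num_right - num_wrong / (num_choices - 1)

# Example: 30 right and 8 wrong on 4-option items -> 30 - 8/3 ≈ 27.33
print(corrected_score(30, 8, 4))
```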
Peer reviewed: Chen, Ssu-Kuang; And Others – Educational and Psychological Measurement, 1997
A simulation study explored the effect of population distribution on maximum likelihood estimation (MLE) and expected a posteriori (EAP) estimation in computerized adaptive testing based on the rating scale model of D. Andrich (1978). The choice between EAP and MLE for particular situations is discussed. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
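Chen and colleagues compare maximum likelihood and expected a posteriori ability estimates; a minimal sketch of EAP by numerical quadrature, assuming a dichotomous Rasch model and a normal prior rather than the rating scale model used in the study:

```python
import numpy as np

def eap_estimate(responses, difficulties, prior_mean=0.0, prior_sd=1.0):
    """Expected a posteriori (EAP) ability estimate by quadrature.

    responses    : vector of 0/1 scored answers
    difficulties : Rasch item difficulties (same length)
    The cited study used Andrich's rating scale model; this dichotomous
    version is only illustrative.
    """
    theta = np.linspace(-4, 4, 81)                                  # quadrature nodes
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)   # normal prior weights
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - np.asarray(difficulties))))
    like = np.prod(np.where(np.asarray(responses) == 1, p, 1 - p), axis=1)
    post = prior * like
    return float(np.sum(theta * post) / np.sum(post))

print(eap_estimate([1, 1, 0, 1], [-0.5, 0.0, 0.5, 1.0]))
```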
Peer reviewed: Spray, Judith A.; Reckase, Mark D. – Journal of Educational and Behavioral Statistics, 1996
Two procedures for classifying examinees into categories, one based on the sequential probability ratio test (SPRT) and the other on sequential Bayes methodology, were compared to determine which required fewer items for classification. Results showed that the SPRT procedure requires fewer items to achieve the same accuracy level. (SLD)
Descriptors: Ability, Bayesian Statistics, Classification, Comparative Analysis
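A minimal sketch of Wald's SPRT decision rule for a two-category (mastery) classification, assuming a simple binomial response model with illustrative cut proportions rather than the IRT-based likelihoods the article works with:

```python
import math

def sprt_classify(responses, p0=0.55, p1=0.75, alpha=0.05, beta=0.05):
    """Sequential probability ratio test on a stream of 0/1 responses.

    H0: true proportion correct = p0 (non-master)
    H1: true proportion correct = p1 (master)
    Returns ("master" | "non-master" | "undecided", items_used).
    """
    lower = math.log(beta / (1 - alpha))        # accept H0 at or below this bound
    upper = math.log((1 - beta) / alpha)        # accept H1 at or above this bound
    log_lr = 0.0
    for n, x in enumerate(responses, start=1):
        log_lr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if log_lr >= upper:
            return "master", n
        if log_lr <= lower:
            return "non-master", n
    return "undecided", len(responses)

print(sprt_classify([1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]))
```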
Peer reviewed: Eignor, Daniel R. – Journal of Educational Measurement, 1997
The authors of the "Guidelines," a task force of eight, intend to present an organized list of features to be considered in reporting or evaluating computerized-adaptive assessments. Apart from a few weaknesses, the book is a useful and complete document that will be very helpful to test developers. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Evaluation Methods, Guidelines
Peer reviewed: Holopainen, Leena; Ahonen, Timo; Lyytinen, Heikki – Scandinavian Journal of Educational Research, 2002
Developed a computer-based assessment of the use of beginning and end analogies based on clue syllables of five different syllable structures to examine the role of analogy in an orthographically regular language, Finnish. Results for 47 children suggest that, unlike the effect seen in English, reading in Finnish is based on single phoneme/letter…
Descriptors: Beginning Reading, Computer Assisted Testing, Elementary School Students, Finnish
Feld, Jason K.; Bergan, Kathryn S. – Child Care Information Exchange, 2002
Discusses issues for early childhood education administrators selecting a new assessment system. Considers features of screening and developmental assessment tools to look for and compares formats, strengths, and limitations for three types of assessment systems: paper-based, stand-alone software, and web-based assessment that allows for…
Descriptors: Computer Assisted Testing, Computer Uses in Education, Early Childhood Education, Educational Technology
Peer reviewed: Green, Donald Ross; And Others – Applied Measurement in Education, 1989
Potential benefits of using item response theory in test construction are evaluated using the experience and evidence accumulated during nine years of using a three-parameter model in the development of major achievement batteries. Topics addressed include error of measurement, test equating, item bias, and item difficulty. (TJH)
Descriptors: Achievement Tests, Computer Assisted Testing, Difficulty Level, Equated Scores
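The three-parameter model referred to here is the 3PL item response function; a minimal sketch, with the 1.7 scaling constant included as a conventional assumption:

```python
import math

def p_correct_3pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic (3PL) probability of a correct response.

    theta   : examinee ability
    a, b, c : discrimination, difficulty, pseudo-guessing parameters
    D       : scaling constant (1.7 is the common convention; an assumption here)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# Example: an item of average difficulty with modest guessing
print(p_correct_3pl(theta=0.5, a=1.2, b=0.0, c=0.2))
```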
Peer reviewed: Ward, Thomas J., Jr.; And Others – Journal of Educational Computing Research, 1989
Discussion of computer-assisted testing focuses on a study of college students that investigated whether a computerized test which incorporated traditional test taking interfaces had any effect on students' performance, anxiety level, or attitudes toward the computer. Results indicate no difference in performance but a significant difference in…
Descriptors: Academic Achievement, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed: Kumar, David D.; And Others – Educational Technology Research and Development, 1994
Explores the emerging interface between computer technology and cognitive psychology for performance assessment in science education. Discussion includes interface theories and interface technologies and prototype projects for building an alternative assessment technology. (50 references) (KRN)
Descriptors: Cognitive Psychology, Computer Assisted Testing, Computers, Educational Research
Peer reviewed: Koch, William R.; Dodd, Barbara G. – Educational and Psychological Measurement, 1995
Basic procedures for performing computerized adaptive testing based on the successive intervals (SI) Rasch model were evaluated. The SI model was applied to simulated and real attitude data sets. Item pools as small as 30 items performed well, and the model appeared practical for Likert-type data. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
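A minimal sketch of the adaptive item-selection step, assuming a dichotomous Rasch pool and maximum-information selection; the study itself used the polytomous successive intervals model, so this only shows the general mechanism:

```python
import math

def next_item(theta, difficulties, administered):
    """Pick the unused Rasch item with maximum Fisher information at theta.

    For the Rasch model information is p * (1 - p), which is largest when
    an item's difficulty is closest to the current ability estimate.
    """
    best, best_info = None, -1.0
    for i, b in enumerate(difficulties):
        if i in administered:
            continue
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        info = p * (1.0 - p)
        if info > best_info:
            best, best_info = i, info
    return best

pool = [-1.5, -0.5, 0.0, 0.6, 1.4]          # hypothetical item difficulties
print(next_item(theta=0.3, difficulties=pool, administered={2}))
```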
Peer reviewed: Metz, Dale Evan; And Others – Journal of Communication Disorders, 1992
A preliminary scheme for estimating the speech intelligibility of hearing-impaired speakers from acoustic parameters, using an artificial neural network to process the acoustic input variables mathematically, is outlined. Tests with 60 hearing-impaired speakers found the scheme to be highly accurate in identifying speakers separated by…
Descriptors: Acoustics, Computer Assisted Testing, Computer Oriented Programs, Estimation (Mathematics)
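A minimal sketch of the general approach (a small neural network regressing intelligibility scores on acoustic measures), using scikit-learn and synthetic stand-in data; the network architecture and the acoustic variables of the original study are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for acoustic predictors (e.g. durations, formant values)
X = rng.normal(size=(60, 5))
# Synthetic intelligibility scores loosely related to the predictors
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=60)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]))   # estimated intelligibility for the first three speakers
```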
Peer reviewed: Stone, Clement A. – Educational Measurement: Issues and Practice, 1992
TESTAT is a supplementary module for the popular SYSTAT statistical package for the personal computer. The program performs test analyses based on classical test theory and item response theory. Limitations and advantages are discussed. (SLD)
Descriptors: Computer Assisted Testing, Computer Software Evaluation, Error of Measurement, Item Response Theory
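The classical-test-theory side of such an analysis is easy to illustrate; a minimal sketch of coefficient alpha on an examinees-by-items score matrix (not TESTAT's own routines):

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an examinees-by-items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

data = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0]]
print(round(cronbach_alpha(data), 3))
```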
Peer reviewed: Armstrong, Ronald D.; And Others – Psychometrika, 1992
A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from an item bank by using binary programming techniques. The parallel tests are created to match an existing seed test item for item and to match user-supplied taxonomic specifications. (SLD)
Descriptors: Algorithms, Arithmetic, Computer Assisted Testing, Equations (Mathematics)
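A minimal sketch of the binary-programming idea using the PuLP library: select one new form from a hypothetical bank so that it matches the seed test's length and total difficulty; the item-for-item matching and taxonomic constraints of the article are not reproduced:

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

bank_difficulty = [0.2, 0.5, 0.8, 1.1, 1.4, 0.3, 0.9, 0.6]   # hypothetical bank
seed_difficulty = [0.5, 0.8, 1.1]                              # seed test items
target = sum(seed_difficulty)

prob = LpProblem("parallel_form", LpMinimize)
x = [LpVariable(f"use_{i}", cat=LpBinary) for i in range(len(bank_difficulty))]
dev = LpVariable("deviation", lowBound=0)

prob += dev                                                    # minimize |mismatch|
prob += lpSum(x) == len(seed_difficulty)                       # same test length
total = lpSum(b * xi for b, xi in zip(bank_difficulty, x))
prob += total - target <= dev                                  # linearized absolute value
prob += target - total <= dev

prob.solve()
print([i for i, xi in enumerate(x) if xi.value() > 0.5])       # items in the new form
```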
Hicken, Samuel – Collegiate Microcomputer, 1993
Describes a pilot study that examined issues encountered when computers were used for a master's degree comprehensive examination. Computer-based testing is discussed; procedures used for administering the exam are detailed; pre- and posttests that examined student attitudes are described; and recommendations for using computers in comprehensive…
Descriptors: Computer Assisted Testing, Higher Education, Masters Degrees, Pilot Projects
Peer reviewed: Jamieson, Joan; And Others – System, 1993
The value of open-ended responses for computer-assisted-language-learning lessons and language tests is asserted. Results from a study in which students' notes and recall protocols of computerized reading passages were scored by both people and computers indicate that computer programs can score as reliably as human raters, and in less time. (45…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Language Tests, Reading Comprehension


