Showing all 11 results
Peer reviewed
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Peer reviewed
Bejar, Isaac I. – Assessment in Education: Principles, Policy & Practice, 2011
Automated scoring of constructed responses is already operational in several testing programmes. However, as the methodology matures and the demand for the utilisation of constructed responses increases, the volume of automated scoring is likely to increase at a fast pace. Quality assurance and control of the scoring process will likely be more…
Descriptors: Evidence, Quality Control, Scoring, Quality Assurance
Green, Bert F. – New Directions for Testing and Measurement, 1983
Computerized adaptive testing allows us to create a unique personalized test that matches the ability and knowledge of the test taker. (Author)
Descriptors: Adaptive Testing, Computer Assisted Testing, Individual Needs, Individual Testing
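As a concrete gloss on the adaptive-testing idea in the Green abstract, below is a minimal sketch of a Rasch-model CAT loop: each item is chosen to be maximally informative at the current ability estimate, and the estimate is updated after every response. The item pool, the examinee's true ability, and the crude MAP update are hypothetical illustrations, not the method of any paper listed here.

```python
import math, random

def p_correct(theta, b):
    """Rasch-model probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, remaining):
    """Under the Rasch model, information peaks where difficulty equals theta."""
    return min(remaining, key=lambda b: abs(b - theta))

def map_theta(responses, steps=50, lr=0.1):
    """Crude MAP ability estimate with a standard-normal prior (gradient ascent)."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(x - p_correct(theta, b) for b, x in responses) - theta
        theta += lr * grad
    return theta

random.seed(1)
true_theta = 0.8                                     # hypothetical examinee
remaining = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]   # hypothetical item pool
responses = []
for _ in range(5):
    b = next_item(map_theta(responses), remaining)
    remaining.remove(b)
    x = int(random.random() < p_correct(true_theta, b))  # simulated response
    responses.append((b, x))
print("estimated ability:", round(map_theta(responses), 2))
```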
Bejar, Isaac I. – 1996
Generative response modeling is an approach to test development and response modeling in which items are created so that their parameters under a given response model can be anticipated from the psychological processes and knowledge required to respond to them. That is, the computer would not merely…
Descriptors: Ability, Computer Assisted Testing, Cost Effectiveness, Estimation (Mathematics)
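The generative idea in the Bejar abstract, anticipating item parameters from the knowledge and processes an item demands, can be sketched as a simple feature-to-difficulty model. The features, weights, and intercept below are invented for illustration; a real generative model would be grounded in cognitive analysis and calibration data.

```python
# Hypothetical sketch: anticipate an item's difficulty from codable features
# of the task, rather than estimating it only after pretesting.
FEATURE_WEIGHTS = {
    "num_steps": 0.40,        # each extra solution step raises difficulty
    "abstract_symbols": 0.25,
    "negation_in_stem": 0.30,
}
INTERCEPT = -1.0

def predicted_difficulty(item_features):
    """Linear model mapping item features to an anticipated difficulty parameter."""
    return INTERCEPT + sum(FEATURE_WEIGHTS[k] * v for k, v in item_features.items())

item = {"num_steps": 3, "abstract_symbols": 1, "negation_in_stem": 0}
print(predicted_difficulty(item))  # -1.0 + 1.2 + 0.25 = 0.45
```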
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
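A minimal sketch of the kind of testing-time model the Halkitis study describes: predict each item's expected response latency from known item characteristics, then sum over the test form. The coefficients and characteristics here are assumptions for illustration; a real model would be fit to observed latencies.

```python
# Hypothetical linear latency model: seconds per item as a function of
# item length and difficulty. Coefficients are invented, not fitted.
def predicted_seconds(word_count, difficulty):
    """Expected response latency for one item."""
    return 15.0 + 0.8 * word_count + 20.0 * difficulty

test_form = [  # (word_count, difficulty on a 0-1 scale) for each item
    (40, 0.3), (55, 0.6), (30, 0.2), (70, 0.8),
]
total = sum(predicted_seconds(w, d) for w, d in test_form)
print(f"predicted testing time: {total / 60:.1f} minutes")
```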
Stone, Gregory Ethan – 1994
The quality of fit between the data and the measurement model is fundamental to any discussion of results. Fit has been the subject of inquiry since as early as the 1920s. Most early explorations concentrated on assessing global fit or subset fits on fixed-length, traditional paper-and-pencil tests given as a single unit. The detection of aberrant…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Assessment, Educational History
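Since the Stone abstract breaks off just as it turns to aberrant response patterns, here is one standard person-fit statistic (the standardized log-likelihood, l_z) as a general illustration of fit detection; it is not the dissertation's own method, and the item difficulties and response pattern below are invented.

```python
import math

def p_correct(theta, b):
    """Rasch-model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def lz_statistic(theta, difficulties, responses):
    """Standardized log-likelihood person-fit statistic (l_z);
    large negative values suggest an aberrant response pattern."""
    ps = [p_correct(theta, b) for b in difficulties]
    l0 = sum(x * math.log(p) + (1 - x) * math.log(1 - p)
             for x, p in zip(responses, ps))
    mean = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in ps)
    var = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in ps)
    return (l0 - mean) / math.sqrt(var)

difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5]
responses = [0, 0, 1, 1, 1]   # misses easy items, passes hard ones
print(round(lz_statistic(0.0, difficulties, responses), 2))  # about -3.5
```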
Bergstrom, Betty; And Others – 1994
Response times from a computerized adaptive certification examination taken by 204 examinees were analyzed using a hierarchical linear model. Two equations were posed: a within-person model and a between-person model. Variance within persons was eight times greater than variance between persons. Several variables…
Descriptors: Adaptive Testing, Adults, Certification, Computer Assisted Testing
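The within-person versus between-person variance comparison in the Bergstrom abstract can be illustrated with a crude moment-based decomposition of hypothetical log response times. This is not a fitted hierarchical linear model, just the intuition behind one: pooled within-person variance against the variance of person means.

```python
from statistics import mean, pvariance

latencies = {  # person -> log response times (invented data)
    "A": [2.1, 2.4, 2.0, 2.6],
    "B": [2.3, 2.2, 2.5, 2.1],
    "C": [2.0, 2.7, 2.3, 2.2],
}
person_means = {p: mean(ts) for p, ts in latencies.items()}
within = mean(pvariance(ts) for ts in latencies.values())
between = pvariance(list(person_means.values()))
print(f"within-person variance:  {within:.3f}")
print(f"between-person variance: {between:.3f}")
```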
Parshall, Cynthia G.; Stewart, Rob; Ritter, Judy – 1996
While computer-based tests might be as simple as computerized versions of paper-and-pencil examinations, more innovative applications also exist. Examples of innovations in computer-based assessment include the use of graphics or sound, some measure of interactivity, a change in the means by which examinees respond to items, and the application…
Descriptors: College Students, Computer Assisted Testing, Educational Innovation, Graphic Arts
Braswell, James S.; Jackson, Carol A. – 1995
A new free-response item type for mathematics tests is described. The item type, referred to as the Student-Produced Response (SPR), was first introduced into the Preliminary Scholastic Aptitude Test/National Merit Scholarship Qualifying Test in 1993 and into the Scholastic Aptitude Test in 1994. Students solve a problem and record the answer by…
Descriptors: Computer Assisted Testing, Educational Assessment, Guessing (Tests), Mathematics Tests
Wise, Steven L. – 1996
In recent years, a controversy has arisen about the advisability of allowing examinees to review their test items and possibly change answers. Arguments for and against allowing item review are discussed, and issues that a test designer should consider when designing a Computerized Adaptive Test (CAT) are identified. Most CATs do not allow…
Descriptors: Achievement Gains, Adaptive Testing, Computer Assisted Testing, Error Correction
Hecht, Jeffrey B.; And Others – 1993
A method of qualitative data analysis that used computer software as a tool to help organize and analyze open-ended survey responses was examined. Reasons for using open-ended, as opposed to closed-ended, questionnaire items are discussed, as well as the construction of open-ended questions and response analysis. Because the method is based on…
Descriptors: Attitude Change, Coding, Computer Assisted Testing, Computer Software
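As a toy illustration of software-assisted analysis of open-ended responses, in the spirit of the Hecht study though not its actual implementation, the sketch below tags each response with every codebook code whose keywords it contains and tallies the results. The codebook and responses are invented.

```python
from collections import Counter

CODEBOOK = {  # hypothetical codes and their trigger keywords
    "workload": ["busy", "time", "overload"],
    "support":  ["help", "mentor", "support"],
}

def code_response(text):
    """Return the set of codes whose keywords appear in the response."""
    text = text.lower()
    return {code for code, kws in CODEBOOK.items() if any(k in text for k in kws)}

responses = [
    "Not enough time in the day; constant overload.",
    "My mentor was a huge help.",
]
tally = Counter(c for r in responses for c in code_response(r))
print(dict(tally))  # {'workload': 1, 'support': 1}
```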