Showing 421 to 435 of 632 results
Luecht, Richard M. – 2001
The Microsoft Certification Program (MCP) includes many new computer-based item types, based on complex cases involving the Windows 2000® operating system. This Innovative Item Technology (IIT) has presented challenges beyond traditional psychometric considerations such as capturing and storing the relevant response data from…
Descriptors: Certification, Coding, Computer Assisted Testing, Data Collection
Papanastasiou, Elena C. – 2002
Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinee's point of view, is that in many…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Review (Reexamination)
Patelis, Thanos – College Entrance Examination Board, 2000
Because different types of computerized tests exist and continue to emerge, the term "computer-based testing" does not encompass all of the various models that may exist. As a result, test delivery model (TDM) is used to describe the variety of methods that exist in delivering tests to examinees. The criterion that is used to distinguish…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Delivery Systems
Peer reviewed
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Peer reviewed
Huba, G. J. – Educational and Psychological Measurement, 1986
The runs test for random sequences of responding is proposed as an index of stereotyped responding in long inventories with dichotomous items. This index is useful for detecting whether the client shifts between response alternatives more or less frequently than would be expected by chance. (LMO)
Descriptors: Computer Assisted Testing, Personality Measures, Response Style (Tests), Scoring
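The runs-test index described in the Huba (1986) abstract can be sketched as follows. This is a minimal illustration, not the author's implementation; it assumes the standard Wald-Wolfowitz normal approximation for the number of runs in a 0/1 sequence:

```python
# Hypothetical sketch: Wald-Wolfowitz runs test as an index of
# stereotyped responding on a dichotomous (0/1) inventory.
from math import sqrt

def runs_z(responses):
    """z-score for the number of runs in a 0/1 response sequence.

    A large negative z means fewer alternations than chance expects
    (perseverative responding, e.g. long strings of the same answer);
    a large positive z means more alternations than chance expects
    (e.g. mechanical zig-zag answering).
    """
    n1 = sum(responses)           # count of 1s (e.g., "true" answers)
    n2 = len(responses) - n1      # count of 0s
    n = n1 + n2
    if n1 == 0 or n2 == 0:
        raise ValueError("runs test needs both response alternatives")
    # A run is a maximal block of identical consecutive responses.
    runs = 1 + sum(a != b for a, b in zip(responses, responses[1:]))
    mu = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0))
    return (runs - mu) / sqrt(var)

# Strict alternation yields far more runs than chance predicts,
# so its z-score is large and positive.
print(round(runs_z([0, 1] * 20), 2))
```

Flagging |z| above a conventional cutoff (e.g., 1.96) would mark a response record as possibly stereotyped rather than content-driven.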
Lippey, Gerald; Partos, Nathan – Educational Technology, 1976
This article describes some of the programming improvements made to a computer-based instructional support system after it was operating as had been envisioned during its design. Changes were made to accommodate differences between the original objectives and what users discovered they really wished to do. (Author/BD)
Descriptors: Computer Assisted Testing, Computer Programs, Item Banks, Test Construction
Wise, Steven L. – 1999
Outside of large-scale testing programs, the computerized adaptive test (CAT) has thus far had only limited impact on measurement practice. In smaller-scale testing contexts, limited data are often available, which precludes the establishment of calibrated item pools for use by traditional (i.e., item response theory (IRT) based) CATs. This paper…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Scores
Anderson, Richard Ivan – Journal of Computer-Based Instruction, 1982
Describes confidence testing methods (confidence weighting, probabilistic marking, multiple alternative selection) as alternative to computer-based, multiple choice tests and explains potential benefits (increased reliability, improved examinee evaluation of alternatives, extended diagnostic information and remediation prescriptions, happier…
Descriptors: Computer Assisted Testing, Confidence Testing, Multiple Choice Tests, Probability
Peer reviewed
Koch, William R.; Dodd, Barbara G. – Applied Measurement in Education, 1989
Various aspects of the computerized adaptive testing (CAT) procedure for partial credit scoring were manipulated, focusing on the effects of the manipulations on operational characteristics of the CAT. The effects of item-pool size, item-pool information, and stepsizes used along the trait continuum were assessed. (TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Maximum Likelihood Statistics
Peer reviewed
McMinn, Mark R.; Ellens, Brent M.; Soref, Erez – Assessment, 1999
Surveyed 364 members of the Society for Personality Assessment to determine how they use computer-based test interpretation software (CBTI) in their work, and their perspectives on the ethics of using CBTI. Psychologists commonly use CBTI for test scoring, but not to formulate a case or as an alternative to a written report. (SLD)
Descriptors: Behavior Patterns, Computer Assisted Testing, Computer Software, Ethics
Peer reviewed
Wang, LihShing; Li, Chun-Shan – Journal of Applied Measurement, 2001
Used Monte Carlo simulation to compare the relative measurement efficiency of polytomous modeling and dichotomous modeling under different scoring schemes and termination criteria. Results suggest that polytomous computerized adaptive testing (CAT) yields marginal gains over dichotomous CAT when termination criteria are more stringent. Discusses…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Monte Carlo Methods
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Scharber, Cassandra; Dexter, Sara; Riedel, Eric – Journal of Technology, Learning, and Assessment, 2008
The purpose of this research is to analyze preservice teachers' use of and reactions to an automated essay scorer used within an online, case-based learning environment called ETIPS. Data analyzed include post-assignment surveys, a user log of students' actions within the cases, instructor-assigned scores on final essays, and interviews with four…
Descriptors: Test Scoring Machines, Essays, Student Experience, Automation
Peer reviewed
Ramakishnan, Sadhu Balasundaram; Ramadoss, Balakrishnan – International Journal on E-Learning, 2009
Over the past several decades, a wider range of assessment strategies has gained prominence in classrooms, including complex assessment items such as individual or group projects, student journals and other creative writing tasks, graphic/artistic representations of knowledge, clinical interviews, student presentations and performances, peer- and…
Descriptors: Evaluation Problems, Web Based Instruction, Program Effectiveness, Internet
Peer reviewed
Walker, N. William; Myrick, Carolyn Cobb – Journal of School Psychology, 1985
Ethical considerations in use of computers in psychological testing and assessment are discussed. Existing ethics and standards that provide guidance to users of computerized test interpretation and report-writing programs are discussed and guidelines are suggested. Areas of appropriate use of computers in testing and assessment are explored.…
Descriptors: Accountability, Computer Assisted Testing, Confidentiality, Ethics