Showing 691 to 705 of 832 results
Peer reviewed
Searls, Donald T.; And Others – Journal of Experimental Education, 1990
Indices that detail aspects of student test responses include overall aberrancy; tendencies to miss relatively easy items; tendencies to correctly answer more difficult items; and a combination that indicates how the latter tendencies balance each other. Mathematics test results for 368 college students illustrate the indices. (SLD)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Response Style (Tests)
Peer reviewed
Andrich, David – Psychometrika, 1995
This book discusses adapting pencil-and-paper tests to computerized testing. Mention is made of models for graded responses to items and of possibilities beyond pencil-and-paper tests, but the book is essentially about dichotomously scored test items. Contrasts between item response theory and classical test theory are described. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Scores
American Language Review, 1998
Provides information and strategies to help language teachers prepare students for the new computerized version of the Test of English as a Foreign Language. Information focuses on changes in scoring and test format. (Author/VWL)
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Scores
Peer reviewed
Waterhouse, Julie Keith; Beeman, Pamela Butler – Journal of Professional Nursing, 2001
Discriminant analysis was used to identify variables predictive of success in the computerized National Council Licensure Examination for Registered Nurses with data from 289 nursing graduates. Using seven significant predictors, 94% of passes and 92% of failures were correctly identified. (Contains 23 references.) (SK)
Descriptors: Computer Assisted Testing, Discriminant Analysis, Higher Education, Licensing Examinations (Professions)
Peer reviewed
Direct link
Kolen, Michael J. – Educational Assessment, 1999
Develops a conceptual framework that addresses score comparability for performance assessments, adaptive tests, paper-and-pencil tests, and alternate item pools for computerized tests. Outlines testing situation aspects that might threaten score comparability and describes procedures for evaluating the degree of score comparability. Suggests ways…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Performance Based Assessment
Olson, Allan – American School Board Journal, 2000
The Northwest Evaluation Association, serving over 300 U.S. school districts, is developing an Internet-enabled assessment system that adapts questions to each student's performance. Shorter, adaptive tests help students avoid the frustration or boredom caused by questions that are too difficult or too easy. Scores are as valid as traditional test scores. (MLH)
Descriptors: Accountability, Achievement Tests, Computer Assisted Testing, Elementary Secondary Education
Peer reviewed
Direct link
Wang, Tianyou; Zhang, Jiawei – Psychometrika, 2006
This paper deals with optimal partitioning of limited testing time in order to achieve maximum total test score. Nonlinear optimization theory was used to analyze this problem. A general case using a generic item response model is first presented. A special case that applies a response time model proposed by Wang and Hanson (2005) is also…
Descriptors: Reaction Time, Testing, Scores, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Rotou, Ourania; Patsula, Liane; Steffen, Manfred; Rizavi, Saba – ETS Research Report Series, 2007
Traditionally, the fixed-length linear paper-and-pencil (P&P) mode of administration has been the standard method of test delivery. With the advancement of technology, however, the popularity of administering tests using adaptive methods like computerized adaptive testing (CAT) and multistage testing (MST) has grown in the field of measurement…
Descriptors: Comparative Analysis, Test Format, Computer Assisted Testing, Models
Hoover, Robert M. – 1988
This digest on test uses in counseling discusses the selection, administration, and scoring of tests; the interpretation of test results; and communication of results to clients. It examines such issues in testing as confidentiality, counselor preparation, client involvement in the testing process, computerized testing, and ethics. (NB)
Descriptors: Computer Assisted Testing, Confidentiality, Counseling Techniques, Counselor Qualifications
Peer reviewed
Jamieson, Joan; And Others – System, 1993
The value of open-ended responses for computer-assisted-language-learning lessons and language tests is asserted. Results from a study in which students' notes and recall protocols of computerized reading passages were scored by both people and computers indicate that computer programs can score as reliably as human raters, and in less time. (45…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Language Tests, Reading Comprehension
Peer reviewed
Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. Neither order nor mode of administration had an effect on equivalence. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
Peer reviewed
Direct link
Wise, Steven L.; Kong, Xiaojing – Applied Measurement in Education, 2005
When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method, based on item response time, for measuring examinee test-taking effort on computer-based test items. This measure, termed…
Descriptors: Psychometrics, Validity, Reaction Time, Test Items
Peer reviewed
Direct link
Hargreaves, Melanie; Shorrocks-Taylor, Diane; Swinnerton, Bronwen; Tait, Kenneth; Threlfall, John – Educational Research, 2004
This paper reports on the results of a study of English children's performance on a computer mathematics assessment compared with a pencil-and-paper assessment. Two matched samples of children were each assessed on one of two mathematics pencil-and-paper tests and assessed a month later on a cloned computer test. The performance scores were better…
Descriptors: Foreign Countries, Elementary School Mathematics, Elementary School Students, Scores
Peer reviewed
Direct link
Brosvic, Gary M.; Epstein, Michael L.; Dihoff, Roberta E.; Cook, Michael L. – Psychological Record, 2006
The present studies were undertaken to examine the effects of manipulating delay-interval task (Study 1) and timing of feedback (Study 2) on acquisition and retention. Participants completed a 100-item cumulative final examination, which included 50 items from each laboratory examination, plus 50 entirely new items. Acquisition and retention were…
Descriptors: Individual Testing, Multiple Choice Tests, Feedback, Test Items
Peer reviewed
Direct link
Balajthy, Ernest – Reading Teacher, 2007
Computer-based technologies offer promise as a means to assess students and provide teachers with better understandings of their students' achievement. This article describes recent developments in computer-based and web-based reading and literacy assessment, focusing on assessment administration, information management, and report creation. In…
Descriptors: Student Evaluation, Computer Uses in Education, Computer Software, Computer Assisted Testing