Showing 4,966 to 4,980 of 7,091 results
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CAT) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
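The abstract leaves the specific statistics unnamed; one widely used person-fit index for dichotomous IRT items is the standardized log-likelihood statistic l_z (Drasgow, Levine, & Williams, 1985). A minimal sketch in Python, with hypothetical response and probability values:

```python
import numpy as np

def lz_person_fit(u, p):
    """Standardized log-likelihood person-fit statistic l_z for
    dichotomous items. u: 0/1 item scores; p: model probabilities
    P_i(theta-hat). Large negative values flag nonfitting patterns."""
    u, p = np.asarray(u, float), np.asarray(p, float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))   # observed log-likelihood
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)     # its variance
    return (l0 - e) / np.sqrt(v)

# A pattern that misses easy items but passes hard ones scores very low.
print(lz_person_fit([0, 0, 1, 1], [0.9, 0.8, 0.3, 0.2]))   # about -4.4
```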
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was suggested many years ago, but it has generally been believed to be unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
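As a rough illustration of the contrast the study examines, a 2PL-based sketch: the conventional start evaluates Fisher information at a medium ability (theta = 0), while a prior-informed start evaluates it at an examinee-specific estimate. The item parameters below are invented for illustration:

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def pick_start_item(a, b, theta_start=0.0):
    """Pick the first CAT item: maximize information at the starting
    ability guess (0.0 = conventional 'medium difficulty' start;
    pass a prior-informed estimate to start elsewhere)."""
    return int(np.argmax(fisher_info_2pl(theta_start, a, b)))

a = np.array([1.2, 0.8, 1.5, 1.0])    # discriminations (illustrative)
b = np.array([-1.0, 0.0, 0.5, 1.5])   # difficulties (illustrative)
print(pick_start_item(a, b))                    # conventional medium start
print(pick_start_item(a, b, theta_start=2.0))   # prior-informed start picks a harder item
```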
Lau, C. Allen; Wang, Tianyou – 1998
The purposes of this study were to: (1) extend the sequential probability ratio testing (SPRT) procedure to polytomous item response theory (IRT) models in computerized classification testing (CCT); (2) compare polytomous items with dichotomous items using the SPRT procedure for their accuracy and efficiency; (3) study a direct approach in…
Descriptors: Computer Assisted Testing, Cutting Scores, Item Response Theory, Mastery Tests
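The polytomous extension studied here isn't reproduced, but the dichotomous SPRT baseline it builds on can be sketched: accumulate the log-likelihood ratio of a "master" ability against a "nonmaster" ability and stop at Wald's bounds. A Python sketch assuming 2PL items:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sprt_classify(u, a, b, theta0, theta1, alpha=0.05, beta=0.05):
    """SPRT mastery decision for dichotomous IRT items: accumulate the
    log-likelihood ratio of theta1 ('master', above the cut score)
    versus theta0 ('nonmaster') and stop at Wald's bounds."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for ui, ai, bi in zip(u, a, b):
        p0, p1 = p_2pl(theta0, ai, bi), p_2pl(theta1, ai, bi)
        llr += ui * np.log(p1 / p0) + (1 - ui) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "nonmaster"
    return "continue testing"   # item pool exhausted before a decision

print(sprt_classify([1, 1, 1, 0, 1], a=[1.0] * 5, b=[0.0] * 5,
                    theta0=-0.5, theta1=0.5))
```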
Mislevy, Robert J. – Center for Research on Evaluation Standards and Student Testing CRESST, 2004
In this paper we provide a rationale and approach for articulating a conceptual framework and corresponding development resources to guide the design of science inquiry assessments. Important here is attention to how and why research on cognition and learning, advances in technological capability, and development of sophisticated methods and…
Descriptors: Science, Test Construction, Student Evaluation, Science Tests
Bishop, Dan – InCider, 1983
Following a discussion of drill/practice and tutorial programming techniques (SE 533 144), this part focuses on techniques dealing with text problems. Various listings are included to demonstrate such methods as the READ/DATA approach in presenting questions to students. (JN)
Descriptors: Computer Assisted Testing, Computer Programs, Elementary Secondary Education, Instructional Materials
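The listings themselves are in BASIC and not reproduced here; as a loose, hypothetical analogue, the READ/DATA idiom (questions stored as a data block, read off in order by the drill loop) looks like this in Python:

```python
# Analogue of BASIC's READ/DATA idiom: the questions live in a data
# block; the drill loop "READs" the next question/answer pair in order.
DATA = [
    ("What is 7 x 8?", "56"),
    ("Capital of France?", "Paris"),
]

def drill():
    score = 0
    for question, answer in DATA:
        reply = input(question + " ")
        if reply.strip().lower() == answer.lower():
            score += 1
    print(f"Score: {score}/{len(DATA)}")

if __name__ == "__main__":
    drill()
```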
Peer reviewed
Zatz, Joel L. – American Journal of Pharmaceutical Education, 1982
A method for the computer grading of pharmaceutical calculations exams is described, in which students convert their answers into scientific notation and enter their solutions on a mark-sense form. A table is then generated and posted, listing student identification numbers, exam grades, and which problems were missed. (Author/MLW)
Descriptors: Computation, Computer Assisted Testing, Computer Programs, Grading
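A toy version of the scoring idea, not Zatz's actual program: reconstruct the value from the separately bubbled mantissa and exponent and give credit within a relative tolerance. The function name and tolerance are assumptions:

```python
def grade_answer(student_mantissa, student_exponent, key, rel_tol=0.005):
    """Score one scientific-notation answer against the key. The student
    reports mantissa and exponent separately (as on a mark-sense form);
    credit is given when the reconstructed value falls within a relative
    tolerance of the keyed answer."""
    value = student_mantissa * 10 ** student_exponent
    return abs(value - key) <= rel_tol * abs(key)

print(grade_answer(2.54, -2, 0.0254))   # True: 2.54e-2 matches the key
print(grade_answer(2.54, -3, 0.0254))   # False: exponent slip, no credit
```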
Peer reviewed
Anderson, Jonathan – Journal of Research in Reading, 1983
Reports a number of modifications to the computer readability program STAR (Simple Tests Approach to Readability) designed to make it more useful. (FL)
Descriptors: Computer Assisted Testing, Content Analysis, Readability, Readability Formulas
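STAR's internal computation isn't given in the abstract; as a stand-in example of the kind of formula such readability programs implement, here is the (different) Flesch Reading Ease formula:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores indicate easier text.
    (A stand-in formula; STAR's own computation is not given here.)"""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

print(flesch_reading_ease(words=120, sentences=8, syllables=160))  # ~78.8
```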
Jelden, D. L. – AEDS Monitor, 1982
Describes a procedure for using the computer to assist in evaluating the progress of students on pretests, unit tests, posttests or a combination of tests. The use of computers to evaluate cognitive objectives of a course is examined. Twenty-four references are listed. (MER)
Descriptors: Cognitive Tests, Computer Assisted Testing, Criterion Referenced Tests, Flow Charts
Peer reviewed
Schaefer, Edward; Marschall, Laurence A. – American Journal of Physics, 1980
Describes an easy-to-use set of computer programs for the generation of multiple-choice and essay examinations in an introductory astronomy course. Program enables the user to establish files of test questions and to rapidly assemble printed copies of examinations suitable for photocopying. (Author/GS)
Descriptors: Astronomy, College Science, Computer Assisted Testing, Computer Programs
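A toy sketch of file-based exam assembly in the same spirit (not the authors' programs; the question bank and function are hypothetical):

```python
import random

def assemble_exam(bank, n_items, seed=None):
    """Draw n_items questions from a bank (a list of question strings)
    and number them for printing -- a toy version of assembling a
    printed exam from question files."""
    rng = random.Random(seed)
    chosen = rng.sample(bank, n_items)
    return "\n".join(f"{i + 1}. {q}" for i, q in enumerate(chosen))

bank = ["Define parallax.", "What is a light-year?",
        "Name Jupiter's largest moon.", "State Kepler's third law."]
print(assemble_exam(bank, 2, seed=42))
```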
Proctor, Andrew J. – Journal of Physical Education and Recreation, 1980
As computers become increasingly available to public schools, physical education teachers and coaches will have access to the many services and conveniences the computer offers. Physical education majors should be kept current with technology that affects their professional development. (CJ)
Descriptors: Computer Assisted Testing, Course Content, Higher Education, Measurement Equipment
Peer reviewed
Roos, Linda L.; And Others – Educational and Psychological Measurement, 1996
This article describes Minnesota Computerized Adaptive Testing Language program code for using the MicroCAT 3.5 testing software to administer several types of self-adapted tests. Code is provided for: a basic self-adapted test; a self-adapted version of an adaptive mastery test; and a restricted self-adapted test. (Author/SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Programming
Peer reviewed
Gallagher, Ann; Bennett, Randy Elliot; Cahalan, Cara; Rock, Donald A. – Educational Assessment, 2002
Evaluated whether variance due to computer-based presentation was associated with performance on a new constructed-response type, Mathematical Expression, that requires students to enter expressions. No statistical evidence of construct-irrelevant variance was detected for the 178 undergraduate and graduate students, but some examinees reported…
Descriptors: College Students, Computer Assisted Testing, Constructed Response, Educational Technology
Peer reviewed
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
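One standard way to make number-correct scores comparable across adaptive tests of different difficulty, broadly in the spirit of the method explored here, is to invert each test's characteristic curve. A sketch with invented 2PL item parameters:

```python
import numpy as np

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct at theta."""
    return np.sum(1.0 / (1.0 + np.exp(-a * (theta - b))))

def theta_from_number_correct(x, a, b, lo=-4.0, hi=4.0, tol=1e-6):
    """Invert the TCC by bisection (it is monotone in theta): find the
    theta whose expected number-correct equals the observed score x.
    Each adaptive test has its own TCC, which is what adjusts for
    differences in test difficulty."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tcc(mid, a, b) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = np.array([1.0, 1.2, 0.9])    # illustrative parameters for the
b = np.array([-0.5, 0.2, 1.0])   # items one examinee was administered
print(theta_from_number_correct(2.0, a, b))
```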
Peer reviewed
Potenza, Maria T.; Stocking, Martha L. – Journal of Educational Measurement, 1997
Common strategies for dealing with flawed items in conventional testing, grounded in principles of fairness to examinees, are re-examined in the context of adaptive testing. The additional strategy of retesting from a pool cleansed of flawed items is found, through a Monte Carlo study, to bring about no practical improvement. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Monte Carlo Methods
Peer reviewed
Levinson, Edward M.; Zeman, Heather L.; Ohler, Denise L. – Career Development Quarterly, 2002
Assesses the reliability and validity of the Web-based version of the Career Key. Participants completed the Web-based Career Key and the Self-Directed Search-Form R, then took a second Career Key administration two weeks later. Test-retest reliability ranged between .75 and .84. With the exception of the conventional scale, all…
Descriptors: Career Counseling, Computer Assisted Testing, Concurrent Validity, Test Reliability
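For reference, test-retest reliability of this kind is simply the Pearson correlation between the two administrations' scores; a sketch with hypothetical data:

```python
import numpy as np

def test_retest_reliability(scores_t1, scores_t2):
    """Test-retest reliability: Pearson correlation between the same
    examinees' scores on two administrations (here, two weeks apart)."""
    return np.corrcoef(scores_t1, scores_t2)[0, 1]

# Hypothetical scale scores for five examinees at time 1 and time 2.
t1 = [12, 18, 25, 30, 22]
t2 = [14, 17, 27, 29, 20]
print(round(test_retest_reliability(t1, t2), 2))   # ~0.96
```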