Showing 991 to 1,005 of 1,057 results
Bejar, Isaac I.; Yocom, Peter – 1986
This report explores an approach to item development and psychometric modeling that explicitly incorporates knowledge about the mental models examinees use to solve items, both into a psychometric model that characterizes performance on a test and into the item development process itself. The paper focuses on…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Computer Science, Construct Validity
Lin, Miao-Hsiang – 1986
Specific questions addressed in this study include how time limits affect a test's construct and predictive validities, how time limits affect an examinee's time allocation and test performance, and whether the assumption about how examinees answer items is valid. Interactions involving an examinee's sex and age are studied. Two parallel forms of…
Descriptors: Age Differences, Computer Assisted Testing, Construct Validity, Difficulty Level
O'Brien, Michael; Hampilos, John P. – 1984
The feasibility of creating an item bank from a teacher-made test was examined in two comparable sections of a graduate-level introductory measurement course. The 67-item midterm examination contained multiple-choice and master matching items, which required higher level cognitive processes such as application and analysis. The feasibility of…
Descriptors: Computer Assisted Testing, Criterion Referenced Tests, Difficulty Level, Higher Education
Peer reviewed
Bergstrom, Betty A.; And Others – Applied Measurement in Education, 1992
Effects of altering test difficulty on examinee ability measures and test length in a computer adaptive test were studied for 225 medical technology students in 3 test difficulty conditions. Results suggest that, with an item pool of sufficient depth and breadth, acceptable targeting to test difficulty is possible. (SLD)
Descriptors: Ability, Adaptive Testing, Change, College Students
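The targeting idea behind this study is standard in computer adaptive testing: under a Rasch-type model, an item is most informative when its difficulty sits near the examinee's current ability estimate, and deliberately easier or harder targeting amounts to offsetting that match. Below is a minimal sketch of that logic, assuming a Rasch model, an invented item pool, and a crude step-size ability update; it is illustrative only, not the authors' procedure.

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pick_item(pool, theta, offset=0.0):
    """Pick the unused item whose difficulty is closest to theta + offset.

    A nonzero offset mimics deliberately easier or harder targeting,
    loosely analogous to the difficulty conditions described above.
    """
    return min(pool, key=lambda b: abs(b - (theta + offset)))

random.seed(1)
pool = [random.gauss(0.0, 1.0) for _ in range(300)]  # invented difficulty pool
true_theta, theta = 0.4, 0.0
for _ in range(20):
    b = pick_item(pool, theta, offset=-0.5)          # target slightly easy items
    pool.remove(b)
    correct = random.random() < rasch_p(true_theta, b)
    theta += 0.3 if correct else -0.3                # crude update, not MLE
print(round(theta, 2))
```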
PDF pending restoration
Wise, Steven L.; And Others – 1993
A new testing strategy that provides protection against the problem of having examinees in adaptive testing choose difficulty levels that are not matched to their proficiency levels was introduced and evaluated. The method, termed restricted self-adapted testing (RSAT), still provides examinees with a degree of control over the difficulty levels…
Descriptors: Achievement Tests, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Thommen, John D. – 1992
Testbanking provides teachers with an effective, low-cost, time-saving opportunity to improve the testing aspect of their classes. Testbanking, which involves the use of a testbank program and a computer, allows teachers to develop and generate tests and test forms with a minimum of effort. Teachers who test using true-false, multiple-choice,…
Descriptors: Adaptive Testing, Classroom Techniques, Community Colleges, Computer Assisted Testing
Terwilliger, James S. – 1990
This study was intended to establish "base-line" data with respect to teacher utilization of available microcomputer software for the purposes of: (1) generating teacher-made appraisals; (2) scoring/analyzing teacher-made appraisals; and (3) assigning and recording grades. Differences in reported utilization at the K-4, 5-8, and 9-12 grade levels…
Descriptors: Computer Assisted Testing, Computer Software, Computer Uses in Education, Elementary Secondary Education
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
Huntley, Renee M.; Plake, Barbara S. – 1988
The combinational-format item (CFI)--a multiple-choice item with combinations of alternatives presented as response choices--was studied to determine whether CFIs differ from regular multiple-choice items in item characteristics or in cognitive processing demands. Three undergraduate Foundations of Education classes (consisting of a total of…
Descriptors: Cognitive Processes, Computer Assisted Testing, Difficulty Level, Educational Psychology
Enger, John M. – 1988
In Arkansas, in reaction to complaints about traditional methods of selection for promotion, the civil service commission has chosen to base promotions in the police department solely on scores on locally-developed objective tests. Items developed and loaded into a computerized test bank were selected from six areas of responsibility: (1) criminal…
Descriptors: Computer Assisted Testing, Item Banks, Job Skills, Law Enforcement
Peer reviewed
Kolstad, Rosemarie K.; And Others – Education, 1984
Provides guidelines for teachers writing machine-scored examinations. Explains the use of item analysis (the discrimination index) to single out test items that should be improved or eliminated. Discusses the validity and reliability of classroom achievement tests in contrast to norm-referenced examinations. (JHZ)
Descriptors: Achievement Tests, Computer Assisted Testing, Criterion Referenced Tests, Item Analysis
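The discrimination index referenced here is conventionally the difference in item pass rates between high- and low-scoring groups. A minimal sketch follows, assuming the common upper/lower 27% convention; the sample data are invented.

```python
def discrimination_index(item_correct, total_scores, frac=0.27):
    """Upper-lower discrimination index for a single test item.

    item_correct: 0/1 responses to the item, one per examinee.
    total_scores: matching list of total test scores.
    Returns p(upper group) - p(lower group), using the top and
    bottom `frac` of examinees ranked by total score.
    """
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, int(len(order) * frac))
    lower, upper = order[:k], order[-k:]
    p_upper = sum(item_correct[i] for i in upper) / k
    p_lower = sum(item_correct[i] for i in lower) / k
    return p_upper - p_lower

# Invented responses: items with an index near zero (or negative)
# are the candidates for revision or removal.
item = [1, 0, 1, 1, 0, 1, 0, 0]
totals = [58, 31, 49, 55, 22, 60, 35, 28]
print(discrimination_index(item, totals))  # 1.0 for this toy sample
```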
Peer reviewed
Direct link
McGhee, Debbie E.; Lowell, Nana – New Directions for Teaching and Learning, 2003
This study compares mean ratings, inter-rater reliabilities, and the factor structure of items for online and paper student-rating forms from the University of Washington's Instructional Assessment System. (Contains 3 figures and 2 tables.)
Descriptors: Psychometrics, Factor Structure, Student Evaluation of Teacher Performance, Test Items
Peer reviewed
PDF on ERIC
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Tasse, Marc J.; And Others – 1994
Multilog (Thissen, 1991) was used to estimate parameters of 225 items from the Quebec Adaptive Behavior Scale (QABS). A database containing actual data from 2,439 subjects was used for the parameterization procedures. The two-parameter-logistic model was used in estimating item parameters and in the testing strategy. MicroCAT (Assessment Systems…
Descriptors: Ability, Adaptive Testing, Adjustment (to Environment), Adults
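For reference, the two-parameter logistic (2PL) model named here gives the probability of a correct response as a function of ability theta, item discrimination a, and item difficulty b. A minimal sketch follows; the parameter values are invented, not QABS estimates.

```python
import math

def two_pl(theta, a, b):
    """2PL IRT model: P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Invented parameters: a moderately discriminating, slightly easy item
# answered by an examinee of slightly above-average ability.
print(round(two_pl(theta=0.5, a=1.2, b=-0.3), 3))  # ~0.723
```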
Spray, Judith A.; Reckase, Mark D. – 1994
The issue of test-item selection in support of decision making in adaptive testing is considered. The number of items needed to make a decision is compared for two approaches: selecting items from an item pool that are most informative at the decision point or selecting items that are most informative at the examinee's ability level. The first…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
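The two selection rules compared here can both be expressed through Fisher information, which for a 2PL item is a^2 P(1 - P): the first rule evaluates it at the fixed decision point, the second at the current ability estimate. Below is a minimal sketch, assuming a 2PL model and a hypothetical three-item pool; it is not the authors' implementation.

```python
import math

def p_2pl(theta, a, b):
    """2PL response probability: 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(pool, point):
    """Pick the pool item that is most informative at `point`."""
    return max(pool, key=lambda item: info_2pl(point, item["a"], item["b"]))

# Hypothetical pool: the two rules can pick different items.
pool = [{"a": 1.2, "b": 0.0}, {"a": 1.2, "b": 1.5}, {"a": 0.9, "b": -1.0}]
cut_score = 0.0   # the decision point (e.g., a pass/fail cut)
theta_hat = 1.5   # the examinee's current ability estimate
print(select_item(pool, cut_score))  # rule 1 -> {'a': 1.2, 'b': 0.0}
print(select_item(pool, theta_hat))  # rule 2 -> {'a': 1.2, 'b': 1.5}
```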