Rotou, Ourania; Rupp, André A. – ETS Research Report Series, 2020
This research report provides a description of the processes of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation that entails analyses that take into consideration the specific purpose…
Descriptors: Computer Assisted Testing, Scoring, Educational Assessment, Psychometrics
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. The computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
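The item-selection step at the heart of a CAT algorithm can be illustrated with a minimal sketch. This uses a dichotomous 2PL model (the entry above also covers polytomous models, which this sketch omits); the item bank and parameter values are purely hypothetical.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, bank, used):
    """Select the unused item with maximum information at theta --
    the classic maximum-information CAT selection rule."""
    return max((i for i in range(len(bank)) if i not in used),
               key=lambda i: fisher_info(theta, *bank[i]))

# Hypothetical item bank of (discrimination a, difficulty b) pairs.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2)]
first = next_item(0.0, bank, set())  # most informative item at theta = 0
```

In a full CAT, the ability estimate would be updated after each response (e.g., by maximum likelihood) and the loop repeated until a stopping rule is met.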
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has recently been developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
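The computerized classification testing referred to here is commonly implemented with a sequential probability ratio test (SPRT). A minimal sketch follows; it uses the plain Rasch (1PL) model rather than the article's 1PL-AG (the ability-based guessing term is omitted for brevity), and the cut points and error rates are illustrative.

```python
import math

def p_rasch(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta_low, theta_high,
                  alpha=0.05, beta=0.05):
    """Sequential probability ratio test for mastery classification.

    Accumulates the log-likelihood ratio of 'master' (theta_high)
    versus 'non-master' (theta_low) and stops when a Wald bound is
    crossed.  Returns 'master', 'non-master', or 'continue'.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p_hi, p_lo = p_rasch(theta_high, b), p_rasch(theta_low, b)
        llr += math.log(p_hi / p_lo) if x else math.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "non-master"
    return "continue"

decision = sprt_classify([1] * 6, [0.0] * 6, theta_low=-0.5, theta_high=0.5)
```

Six correct answers on items at the cut point are enough here to cross the upper bound and classify the examinee as a master.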
Morris, Allison – OECD Publishing (NJ1), 2011
This report discusses the most relevant issues concerning standardised student testing in which there are no stakes for students ("standardised testing"), through a literature review and a review of the trends in standardised testing in OECD countries. Unlike standardised tests in which there are high stakes for students, no stakes implies that…
Descriptors: Standardized Tests, Testing, Educational Trends, Educational Research
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternate criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Vidotto, G.; Massidda, D.; Noventa, S. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Descriptors: Interaction, Computation, Computer Assisted Testing, Computer Software
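The averaging model at the core of this approach can be written as a weighted mean of scale values, including an initial-state term. A minimal sketch, with purely illustrative weights and scale values (parameter estimation, which is what R-Average performs, is not attempted here):

```python
def averaging_response(weights, scale_values, w0=1.0, s0=0.0):
    """Averaging model of Information Integration Theory:
    the response is the weighted mean of the attribute scale
    values, including an initial-state term (w0, s0)."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    den = w0 + sum(weights)
    return num / den

# Two attributes with equal weight; the initial state pulls the
# response below the plain mean of the scale values.
r = averaging_response([1.0, 1.0], [10.0, 20.0], w0=1.0, s0=0.0)
```

Because the denominator grows with each attended attribute, adding information can lower the response, which is how the averaging model captures interaction-like patterns without extra parameters.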
Morrow, James R., Jr.; Zhu, Weimo; Franks, B. Don; Meredith, Marilu D.; Spain, Christine – Research Quarterly for Exercise and Sport, 2009
The AAHPER Youth Fitness Test, the first U.S. national fitness test, was published 50 years ago. The seminal work of Kraus and Hirschland influenced the fitness world and continues to do so today. Important youth fitness test initiatives of the last half century are summarized. Key elements leading to continued interest in youth fitness testing…
Descriptors: Physical Fitness, Children, Adolescents, Educational History
Hwang, Gwo-Jen; Chu, Hui-Chun; Yin, Peng-Yeng; Lin, Ji-Yu – Computers & Education, 2008
National certification tests and entrance examinations are among the most important tests for demonstrating a person's ability or knowledge level. To evaluate professional skills or knowledge level accurately, the composed test sheets must meet multiple assessment criteria, such as the ratio of relevant concepts to be evaluated and the estimated…
Descriptors: Item Banks, Knowledge Level, Educational Testing, Evaluation Criteria

Luecht, Richard M. – Applied Psychological Measurement, 1998
Presents a variation of a "greedy" algorithm that can be used in test-assembly problems. The algorithm, the normalized weighted absolute-deviation heuristic, selects items to have a locally optimal fit to a moving set of average criterion values. Demonstrates application of the model. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Criteria, Heuristics
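The flavor of such a greedy heuristic can be sketched as follows. This is a simplified, unweighted variant rather than the normalized weighted absolute-deviation heuristic itself; at each step it picks the item whose addition brings the running attribute means closest to the targets. The item attributes and target values are hypothetical.

```python
def greedy_assemble(items, targets, test_length):
    """Greedy test assembly: each iteration selects the item that
    minimizes the total absolute deviation of the running attribute
    means from the target values (a locally optimal choice)."""
    selected = []
    remaining = list(range(len(items)))
    for _ in range(test_length):
        def deviation(i):
            trial = selected + [i]
            n = len(trial)
            means = [sum(items[j][k] for j in trial) / n
                     for k in range(len(targets))]
            return sum(abs(m - t) for m, t in zip(means, targets))
        best = min(remaining, key=deviation)
        selected.append(best)
        remaining.remove(best)
    return selected

# One attribute (e.g., item difficulty) with a target mean of 0.5.
picked = greedy_assemble([(0.2,), (0.5,), (0.9,)], (0.5,), 2)
```

Because each choice is locally optimal against a moving set of average criterion values, the heuristic is fast but not guaranteed to find the globally best test form.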
Armstrong, Ronald D.; Jones, Douglas H.; Koppel, Nicole B.; Pashley, Peter J. – Applied Psychological Measurement, 2004
A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee's progression through the network of testlets is dictated by the correctness of an examinee's answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible…
Descriptors: Law Schools, Adaptive Testing, Computer Assisted Testing, Test Format
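The branching of a multiple-form structure can be sketched as a small routing table over testlets, with the examinee's proportion-correct score on each testlet determining the next node. The testlet names, cut score, and network below are invented for illustration and are not from the article.

```python
# Hypothetical testlet network: each node names the next testlet for
# a "pass" (proportion correct >= cut) or a "fail" outcome.
network = {
    "start": {"pass": "hard1", "fail": "easy1"},
    "hard1": {"pass": "hard2", "fail": "mixed"},
    "easy1": {"pass": "mixed", "fail": "easy2"},
}

def traverse(network, scores, start="start", cut=0.5):
    """Follow one path through the testlet network, branching on each
    administered testlet's proportion-correct score."""
    node, path = start, []
    for frac in scores:
        path.append(node)
        if node not in network:
            break
        node = network[node]["pass" if frac >= cut else "fail"]
    return path + [node]

# Strong first testlet, weak second: start -> hard1 -> mixed.
route = traverse(network, [0.8, 0.2])
```

Enumerating every path through such a network yields the set of all possible test forms, which is what makes the psychometric properties of an MFS analyzable in advance.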
Steckelberg, Allen L.; Li, Lan; Liu, Xiongyi; Kozak, Mike – Computers in the Schools, 2008
This article describes the development of a Web-based instrument that is part of a strategic planning initiative for technology in K-12 schools in Nebraska. The instrument provides rubrics for self-assessment of the essential conditions necessary for integrating and adopting technology. Essential conditions were defined by an extended panel of…
Descriptors: Strategic Planning, Needs Assessment, Effect Size, Educational Technology
Lunz, Mary E. – 1997
This paper explains the multifacet technology for analyzing performance examinations and the fair-average method of setting criterion standards. The multidimensional nature of performance examinations requires that multiple, and often different, facet elements of a candidate's examination form be accounted for in the analysis. After this is…
Descriptors: Ability, Computer Assisted Testing, Criteria, Educational Technology
MacDonald, Kim; Nielsen, Jean; Lai, Lisa – TESL Canada Journal, 2004
With the growing demand for and use of computer-based language tests (CBLTs) comes the need for clear guidelines to help educators as they attempt to select appropriate tests to assess their students with respect to their second- and foreign-language (L2/FL) teaching-learning goals. The purpose of this article is to provide guidelines to educators…
Descriptors: Language Tests, Guidelines, Language Proficiency, Computer Assisted Testing
Karras, Bryant T.; Tufano, James T. – Evaluation and Program Planning, 2006
This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…
Descriptors: Health Services, Evaluators, Health Promotion, Internet

Marshall, Stewart; Barron, Colin – System, 1987
MARC (Methodical Assessment of Reports by Computer) is a report-marking program that enables teachers to provide individualized feedback on reports written by engineering students. The MARC system is objective in its consistent application of the same programmed criteria, but also allows individual markers to supply their own comments as required.…
Descriptors: Computer Assisted Testing, Computer Software, Engineering, Evaluation Criteria