Showing all 14 results
Peer reviewed
Direct link
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
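The maximum clique approach named in this abstract can be illustrated as follows: treat candidate test forms as graph nodes, connect two forms when they share no items and have comparable measurement accuracy, and extract a large clique of mutually compatible forms. Below is a minimal sketch with networkx; the form pool, the 0.3 accuracy tolerance, and the disjointness rule are illustrative assumptions, not the authors' exact formulation.

```python
import networkx as nx

# Hypothetical pool of candidate test forms (sets of item IDs) paired
# with a stand-in for measurement accuracy: total information per form.
forms = {
    "F1": ({1, 2, 3}, 5.1),
    "F2": ({4, 5, 6}, 5.0),
    "F3": ({1, 5, 9}, 5.2),
    "F4": ({7, 8, 9}, 4.9),
}

G = nx.Graph()
G.add_nodes_from(forms)
for a in forms:
    for b in forms:
        if a < b:
            items_a, info_a = forms[a]
            items_b, info_b = forms[b]
            # Edge = forms share no items and are similar in accuracy.
            if not (items_a & items_b) and abs(info_a - info_b) <= 0.3:
                G.add_edge(a, b)

# Each clique is a set of mutually disjoint, roughly parallel forms;
# a maximum clique gives the most uniform forms assembled at once.
best = max(nx.find_cliques(G), key=len)
print(best)  # e.g. ['F1', 'F2', 'F4']
```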
Peer reviewed
Direct link
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step of the typical process of developing educational and psychological tests is to place the selected test items in a formatted form. This step involves grouping and ordering the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
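As a rough illustration of how MIP can automate formatting, the sketch below uses PuLP to assign items to positions under a toy objective (longer items placed later in the form); the item lengths and the penalty rule are assumptions for illustration, not constraints from the paper.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical data: item lengths in lines of print, plus a toy rule
# that longer items should be placed later in the form.
lengths = {"i1": 12, "i2": 4, "i3": 8, "i4": 2}
items = list(lengths)
slots = range(len(items))

prob = LpProblem("format_form", LpMinimize)
x = LpVariable.dicts("x", (items, slots), cat=LpBinary)

# Penalize long items in early slots (illustrative formatting objective).
prob += lpSum(lengths[i] * (len(items) - 1 - s) * x[i][s]
              for i in items for s in slots)

for i in items:   # each item is placed in exactly one slot
    prob += lpSum(x[i][s] for s in slots) == 1
for s in slots:   # each slot holds exactly one item
    prob += lpSum(x[i][s] for i in items) == 1

prob.solve()
order = sorted(items, key=lambda i: next(s for s in slots if x[i][s].value() == 1))
print(order)  # shortest items first: ['i4', 'i2', 'i3', 'i1']
```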
Peer reviewed
Direct link
Becker, Benjamin; van Rijn, Peter; Molenaar, Dylan; Debeer, Dries – Assessment & Evaluation in Higher Education, 2022
A common approach to increase test security in higher educational high-stakes testing is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue…
Descriptors: Information Security, High Stakes Tests, Computer Security, Test Items
Peer reviewed
Direct link
Harring, Jeffrey R.; Johnson, Tessa L. – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Jeffrey Harring and Ms. Tessa Johnson introduce the linear mixed effects (LME) model as a flexible general framework for simultaneously modeling continuous repeated measures data with a scientifically defensible function that adequately summarizes both individual change as well as the average response. The module…
Descriptors: Educational Assessment, Data Analysis, Longitudinal Studies, Case Studies
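A minimal version of such an LME growth model can be fit with statsmodels; the simulated data, the variable names, and the random-intercept/random-slope structure below are assumptions for illustration, not material from the module.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated-measures data: 50 subjects, 4 waves, with
# subject-specific intercepts and slopes (assumed structure).
rng = np.random.default_rng(0)
n, waves = 50, 4
subj = np.repeat(np.arange(n), waves)
time = np.tile(np.arange(waves), n)
b0 = rng.normal(50, 5, n)[subj]        # random intercepts
b1 = rng.normal(2.0, 0.5, n)[subj]     # random slopes
score = b0 + b1 * time + rng.normal(0, 1, n * waves)
df = pd.DataFrame({"subject": subj, "time": time, "score": score})

# Random-intercept, random-slope growth model: individual change
# around an average linear trajectory.
model = smf.mixedlm("score ~ time", df, groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())
```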
Peer reviewed
Direct link
Gregg, Nikole; Leventhal, Brian C. – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Nikole Gregg and Dr. Brian Leventhal discuss strategies to ensure data visualizations achieve graphical excellence. Data visualizations are commonly used by measurement professionals to communicate results to examinees, the public, educators, and other stakeholders. To do so effectively, it is important that these…
Descriptors: Data Analysis, Evidence Based Practice, Visualization, Test Results
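As one hedged illustration of common "graphical excellence" advice (direct labeling, minimal chartjunk, an honest zero-based axis), the matplotlib sketch below plots assumed subgroup means; it is not taken from the module itself.

```python
import matplotlib.pyplot as plt

# Toy subgroup means (assumed data) plotted with decluttering practices
# often recommended for score reporting.
groups = ["Grade 3", "Grade 4", "Grade 5"]
means = [212, 224, 231]

fig, ax = plt.subplots(figsize=(5, 3))
bars = ax.bar(groups, means, color="#4878a8")
ax.bar_label(bars, fmt="%d")           # label values directly on the bars
ax.set_ylabel("Mean scale score")
ax.set_ylim(0, 260)                    # start the axis at zero
for side in ("top", "right"):          # remove non-data ink
    ax.spines[side].set_visible(False)
plt.tight_layout()
plt.savefig("score_report.png", dpi=150)
```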
Peer reviewed
PDF on ERIC Download full text
Wang, Chu-Fu; Lin, Chih-Lung; Deng, Jien-Han – Turkish Online Journal of Educational Technology - TOJET, 2012
Testing is an important stage of teaching, as it helps teachers gauge students' learning results. A good test accurately reflects the capability of a learner. Nowadays, Computer-Assisted Testing (CAT) is greatly improving traditional testing, since computers can automatically and quickly compose a proper test sheet to meet user…
Descriptors: Simulation, Test Items, Student Evaluation, Test Construction
Peer reviewed
Direct link
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
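A toy version of the selection side of ATA can be written as a 0-1 program with PuLP: maximize information at a cut score subject to a fixed test length and content quotas. The pool, information values, and content constraints below are invented for illustration.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Hypothetical 8-item pool: Fisher information at the cut score and a
# content area for each item.
info    = [0.42, 0.38, 0.51, 0.29, 0.47, 0.33, 0.40, 0.36]
content = ["alg", "alg", "geo", "geo", "alg", "geo", "alg", "geo"]
n, form_length = len(info), 4

prob = LpProblem("ata_selection", LpMaximize)
x = LpVariable.dicts("x", range(n), cat=LpBinary)

prob += lpSum(info[i] * x[i] for i in range(n))   # measurement-accuracy objective
prob += lpSum(x.values()) == form_length          # fixed test length
prob += lpSum(x[i] for i in range(n) if content[i] == "alg") == 2
prob += lpSum(x[i] for i in range(n) if content[i] == "geo") == 2

prob.solve()
print([i for i in range(n) if x[i].value() == 1])  # e.g. [0, 2, 4, 7]
```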
Peer reviewed
Direct link
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
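The two-step logic described here (build an item model first, instantiate it by computer second) can be sketched with a plain template; the stem below is a made-up item model, and the distractor rules are assumptions rather than the authors' generation procedures.

```python
from itertools import product

# A toy "item model": a fixed stem with variable slots, in the spirit of
# template-based automatic item generation (all values are assumptions).
stem = "A train travels {speed} km/h for {hours} hours. How far does it go?"
speeds, hours = [60, 80, 100], [2, 3]

items = []
for speed, hrs in product(speeds, hours):
    key = speed * hrs
    distractors = [key + 10, key - 10, speed + hrs]  # simple foil rules
    items.append({
        "stem": stem.format(speed=speed, hours=hrs),
        "key": key,
        "options": sorted([key] + distractors),
    })

print(len(items))        # 6 generated items from one item model
print(items[0]["stem"])
```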
Peer reviewed
Direct link
Wauters, K.; Desmet, P.; Van den Noortgate, W. – Journal of Computer Assisted Learning, 2010
The popularity of intelligent tutoring systems (ITSs) is increasing rapidly. In order to make learning environments more efficient, researchers have been exploring the possibility of an automatic adaptation of the learning environment to the learner or the context. One of the possible adaptation techniques is adaptive item sequencing by matching…
Descriptors: Knowledge Level, Adaptive Testing, Test Items, Item Response Theory
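One simple form of adaptive item sequencing matches item difficulty to the current ability estimate. The sketch below uses the Rasch model, whose item information p(1 - p) peaks where difficulty equals ability; the item bank and the crude stepwise ability update are simplifications, not the adaptation techniques surveyed in the paper.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item bank: difficulties on the same scale as ability.
bank = {"q1": -1.2, "q2": -0.4, "q3": 0.0, "q4": 0.7, "q5": 1.5}
theta, administered = 0.0, set()

for _ in range(3):
    # Rasch information p(1 - p) is largest where difficulty matches
    # ability, so pick the unadministered item that maximizes it.
    item = max((q for q in bank if q not in administered),
               key=lambda q: rasch_p(theta, bank[q]) * (1 - rasch_p(theta, bank[q])))
    administered.add(item)
    correct = True  # stand-in for the learner's actual response
    theta += 0.5 if correct else -0.5  # crude update, not full MLE
    print(item, "-> theta =", round(theta, 2))
```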
Peer reviewed
Direct link
Chatzopoulou, D. I.; Economides, A. A. – Journal of Computer Assisted Learning, 2010
This paper presents Programming Adaptive Testing (PAT), a Web-based adaptive testing system for assessing students' programming knowledge. PAT was used in two high school programming classes by 73 students. The question bank of PAT is composed of 443 questions. A question is classified into one of three difficulty levels. In PAT, the levels of…
Descriptors: Student Evaluation, Prior Learning, Programming, High School Students
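The level-based adaptation described here can be mimicked with a simple transition rule; the bank, the move-up/move-down rule, and the simulated responses below are assumptions rather than PAT's actual logic.

```python
import random

# Toy question bank keyed by difficulty level, echoing PAT's design of
# classifying each question into one of three levels.
bank = {1: ["e1", "e2", "e3"], 2: ["m1", "m2", "m3"], 3: ["h1", "h2", "h3"]}
level = 1

for _ in range(5):
    question = random.choice(bank[level])
    answered_correctly = random.random() < 0.6  # stand-in for the student
    # Simple transition rule: move up after a correct answer, down after
    # an incorrect one, staying within levels 1-3.
    level = min(3, level + 1) if answered_correctly else max(1, level - 1)
    print(question, "correct" if answered_correctly else "wrong",
          "| next level:", level)
```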
Liu, Chao-Lin; Lin, Jen-Hsiang; Wang, Yu-Chun – Online Submission, 2010
The authors report an implemented environment for computer-assisted authoring of test items and provide a brief discussion of the applications of NLP techniques for computer-assisted language learning. Test items can serve as a tool for language learners to examine their competence in the target language. The authors apply techniques for…
Descriptors: Cloze Procedure, Listening Comprehension, Test Items, Foreign Countries
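A bare-bones version of computer-assisted cloze authoring blanks a target word and attaches distractors; a real system like the one reported would choose targets and distractors with NLP (POS tagging, frequency data). All inputs below are invented.

```python
import random

def make_cloze(sentence: str, target: str, distractor_pool: list) -> dict:
    """Blank out the target word and build a four-option cloze item."""
    stem = sentence.replace(target, "_____", 1)
    distractors = random.sample([w for w in distractor_pool if w != target], 3)
    options = [target] + distractors
    random.shuffle(options)
    return {"stem": stem, "options": options, "key": target}

# Assumed inputs for illustration only.
item = make_cloze("She decided to adopt a healthier lifestyle.",
                  "adopt", ["adapt", "adjust", "admit", "attain"])
print(item["stem"])
print(item["options"], "->", item["key"])
```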
Peer reviewed
van der Linden, Wim J.; Adema, Jos J. – Journal of Educational Measurement, 1998
Proposes an algorithm for the assembly of multiple test forms in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. Illustrates how the method can be implemented using 0-1 linear programming and gives two examples. (SLD)
Descriptors: Algorithms, Linear Programming, Test Construction, Test Format
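The divide-and-conquer idea, reducing a multiple-form problem to a series of smaller 0-1 problems, can be caricatured by peeling one form off the pool per iteration with PuLP; this sequential sketch is a simplification under invented data, not the authors' two-form formulation.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Hypothetical pool of item information values; one 3-item form is
# split off per iteration by a small 0-1 program.
info = {i: v for i, v in enumerate([0.9, 0.8, 0.8, 0.7, 0.7, 0.6, 0.6, 0.5, 0.4])}
pool, forms, target = set(info), [], 2.0

for k in range(3):
    prob = LpProblem(f"form_{k}", LpMaximize)
    x = LpVariable.dicts("x", sorted(pool), cat=LpBinary)
    prob += lpSum(info[i] * x[i] for i in pool)                  # maximize information
    prob += lpSum(x[i] for i in pool) == 3                       # fixed form length
    prob += lpSum(info[i] * x[i] for i in pool) <= target + 0.2  # keep forms comparable
    prob.solve()
    form = {i for i in pool if x[i].value() == 1}
    forms.append(sorted(form))
    pool -= form                                                 # items are not reused
print(forms)
```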
Peer reviewed
Direct link
Hwang, Gwo-Jen; Lin, Bertrand M. T.; Lin, Tsung-Liang – Computers and Education, 2006
A well-constructed test sheet not only helps the instructor evaluate the learning status of the students, but also facilitates the diagnosis of the problems embedded in the students' learning process. This paper addresses the problem of selecting proper test items to compose a test sheet that conforms to such assessment requirements as average…
Descriptors: Test Items, Item Banks, Student Evaluation, Difficulty Level
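A crude way to meet an average-difficulty requirement of the kind this abstract mentions is random search over candidate sheets; the bank, the 0-1 difficulty scale, and the target below are assumptions, and the paper's actual optimization method is not reproduced here.

```python
import random

# Hypothetical item bank: difficulty on a 0-1 scale. Search for a
# 5-item sheet whose average difficulty is near a target value.
random.seed(1)
bank = {f"q{i}": random.uniform(0.2, 0.9) for i in range(20)}
target, best, best_gap = 0.55, None, float("inf")

for _ in range(2000):                 # simple random restarts
    sheet = random.sample(list(bank), 5)
    avg = sum(bank[q] for q in sheet) / 5
    if abs(avg - target) < best_gap:
        best, best_gap = sheet, abs(avg - target)

print(sorted(best), round(best_gap, 4))
```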
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 2003
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has been administered. Predictions from the model are…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Linear Programming
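Van der Linden's response-time work typically uses a lognormal model, ln T = beta - tau + error, with beta the item's time intensity and tau the test taker's speed. The sketch below predicts each candidate item's expected time that way (mean exp(beta - tau + sigma^2/2)) and drops items unlikely to fit the remaining time; the parameter values and the filtering rule are illustrative assumptions, not the paper's algorithm.

```python
import math

def expected_seconds(time_intensity: float, speed: float, sigma: float = 0.4) -> float:
    """Mean of a lognormal response time: exp(beta - tau + sigma^2 / 2)."""
    return math.exp(time_intensity - speed + sigma ** 2 / 2)

# Hypothetical pool: beta = time intensity per item (log-seconds scale).
pool = {"q1": 4.1, "q2": 3.4, "q3": 4.6, "q4": 3.8}
speed = 0.2            # tau: current estimate of the test taker's speed
remaining = 60.0       # seconds left on the clock

# Keep only items the model predicts the test taker can finish, in the
# spirit of neutralizing time limits during item selection.
feasible = {q: expected_seconds(b, speed) for q, b in pool.items()
            if expected_seconds(b, speed) <= remaining}
print({q: round(t, 1) for q, t in feasible.items()})  # q3 is filtered out
```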