Showing all 15 results
Peer reviewed
Yanyan Fu – Educational Measurement: Issues and Practice, 2024
The template-based automated item-generation (TAIG) approach, which involves template creation, item generation, item selection, field-testing, and evaluation, has more steps than the traditional item development method. Consequently, there is more room for error in the process, and any template errors can cascade to the generated items.…
Descriptors: Error Correction, Automation, Test Items, Test Construction
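For readers new to the template-based approach, a minimal sketch is given below; the template, field values, and the key formula are invented for illustration and are not taken from the article. The point it illustrates is the one in the abstract: every generated item inherits the template, so a single template or key error propagates to the whole batch, which is why template validation matters.

```python
# Toy sketch of template-based item generation and why template errors cascade.
# Template, fields, and key function are hypothetical examples.
def render(template, key_fn, fields):
    item = {"stem": template.format(**fields), "key": key_fn(**fields)}
    assert item["key"] > 0, "a template/key error here would corrupt every generated item"
    return item

template = "A car covers {km} km in {h} hours. What is its average speed in km/h?"
speed = lambda km, h: km / h          # a mistake here (e.g., h / km) appears in all items
batch = [render(template, speed, {"km": km, "h": h}) for km in (90, 120, 150) for h in (2, 3)]
print(len(batch), batch[0])
```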
Jonathan Seiden – Annenberg Institute for School Reform at Brown University, 2025
Direct assessments of early childhood development (ECD) are a cornerstone of research in developmental psychology and are increasingly used to evaluate programs and policies in lower- and middle-income countries. Despite strong psychometric properties, these assessments are too expensive and time-consuming for use in large-scale monitoring or…
Descriptors: Young Children, Child Development, Performance Based Assessment, Developmental Psychology
Peer reviewed
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step in the typical process of developing educational and psychological tests is to place the selected test items into a formatted test form. This step involves grouping and ordering the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
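To make the formatting step concrete, here is a minimal MIP sketch that assigns already-selected items to pages subject to a page-length limit. The item lengths, page capacity, and objective are assumptions for illustration, not the formulation used in the article; it only shows the kind of problem a solver is asked to handle. Requires the PuLP package.

```python
# Assign selected items to pages so no page exceeds its capacity,
# using as few pages as possible. Values are illustrative.
import pulp

lengths = {"i1": 12, "i2": 8, "i3": 15, "i4": 5, "i5": 10}   # lines each item occupies
pages = [1, 2, 3]
capacity = 25                                                # lines per page

prob = pulp.LpProblem("item_formatting", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (lengths, pages), cat="Binary")   # x[i][p] = 1 if item i is on page p
u = pulp.LpVariable.dicts("used", pages, cat="Binary")           # page p is used at all

prob += pulp.lpSum(u[p] for p in pages)                          # objective: minimize pages used
for i in lengths:
    prob += pulp.lpSum(x[i][p] for p in pages) == 1              # each item placed exactly once
for p in pages:
    prob += pulp.lpSum(lengths[i] * x[i][p] for i in lengths) <= capacity * u[p]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for p in pages:
    print(p, [i for i in lengths if x[i][p].value() == 1])
```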
Peer reviewed
Tran, Tich Phuoc; Meacheam, David – IEEE Transactions on Learning Technologies, 2020
The use of learning management systems (LMSs) for learning and knowledge sharing has accelerated quickly in both the education and corporate worlds. Despite the benefits brought by LMSs, current systems still face significant challenges, including the lack of automation in generating quiz questions and managing courses. Over the past decade, more…
Descriptors: Integrated Learning Systems, Test Construction, Test Items, Automation
Peer reviewed
Vie, Jill-Jênn; Popineau, Fabrice; Bruillard, Éric; Bourda, Yolaine – International Journal of Artificial Intelligence in Education, 2018
In large-scale assessments such as those encountered in MOOCs, a great deal of usage data is available because of the number of learners involved. Newcomers who have just arrived on a MOOC have varied knowledge backgrounds, but the platform knows hardly anything about them. It is therefore crucial to elicit their knowledge quickly, in order to…
Descriptors: Automation, Test Construction, Measurement, Online Courses
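As background, eliciting a newcomer's knowledge quickly is the classic adaptive-testing problem. The sketch below is a generic illustration under a Rasch model, always asking the item with maximum Fisher information at the current ability estimate; it is not the authors' method, and the difficulties and the crude ability update are invented.

```python
# Generic adaptive item selection under a Rasch model (illustrative only).
import math

def prob_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def info(theta, b):
    p = prob_correct(theta, b)
    return p * (1 - p)                        # Fisher information of a Rasch item

def next_item(theta, difficulties, asked):
    candidates = [j for j in range(len(difficulties)) if j not in asked]
    return max(candidates, key=lambda j: info(theta, difficulties[j]))

difficulties = [-1.5, -0.5, 0.0, 0.7, 1.8]    # hypothetical calibrated item difficulties
theta, asked = 0.0, set()
for _ in range(3):
    j = next_item(theta, difficulties, asked)
    asked.add(j)
    correct = True                             # stand-in for the learner's actual response
    theta += 0.5 if correct else -0.5          # crude update; a real system would use MLE/EAP
    print(f"asked item {j}, theta now {theta:.2f}")
```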
Peer reviewed
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other methods for question generation and thus is one of the most important lines of research for this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
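A minimal sketch of the Semantic-Web flavour of question generation is shown below: the key comes from one (subject, predicate, object) triple and the distractors from other objects of the same predicate. The triples and the question pattern are invented for illustration; the paper surveys far richer ontology-based strategies.

```python
# Generate a multiple-choice question from simple subject-predicate-object triples.
import random

triples = [
    ("Paris", "isCapitalOf", "France"),
    ("Rome", "isCapitalOf", "Italy"),
    ("Madrid", "isCapitalOf", "Spain"),
    ("Berlin", "isCapitalOf", "Germany"),
]

def make_mcq(triples, predicate, n_distractors=3):
    s, _, correct = random.choice([t for t in triples if t[1] == predicate])
    distractors = [o for (_, p, o) in triples if p == predicate and o != correct]
    options = random.sample(distractors, n_distractors) + [correct]
    random.shuffle(options)
    return {"stem": f"{s} is the capital of which country?", "options": options, "key": correct}

print(make_mcq(triples, "isCapitalOf"))
```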
Peer reviewed
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2013
Changes to the design and development of our educational assessments are resulting in an unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
Descriptors: Educational Assessment, Test Items, Automation, Computer Assisted Testing
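The item-model idea can be sketched as a stem template, value sets for its variable elements, and constraints that keep only sensible combinations; systematic combination of the element values then yields many items. All names and values below are illustrative, not the authors' item models.

```python
# Hedged sketch of an "item model" and the generation step in AIG.
from itertools import product

item_model = {
    "stem": "{n_students} students share {n_pencils} pencils equally. How many pencils does each get?",
    "elements": {"n_students": [2, 3, 4, 5], "n_pencils": [12, 18, 24, 30]},
    "constraint": lambda n_students, n_pencils: n_pencils % n_students == 0,  # whole-number answers only
}

def generate(model):
    names = list(model["elements"])
    for combo in product(*(model["elements"][n] for n in names)):
        fields = dict(zip(names, combo))
        if model["constraint"](**fields):
            yield model["stem"].format(**fields), fields["n_pencils"] // fields["n_students"]

items = list(generate(item_model))
print(len(items), items[:2])
```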
Peer reviewed
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated and, because of the estimation process, they carry uncertainty. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
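One generic way to acknowledge estimation error in assembly, sketched below, is to rank items not by their point-estimate information but by a pessimistic value (estimate minus a multiple of its standard error). This is a robust-selection heuristic for illustration only, not the specific method of the article, and all numbers are invented.

```python
# Rank items by information penalized for estimation uncertainty (illustrative values).
items = [
    {"id": "a", "info": 0.52, "se": 0.15},
    {"id": "b", "info": 0.48, "se": 0.02},
    {"id": "c", "info": 0.45, "se": 0.03},
    {"id": "d", "info": 0.50, "se": 0.12},
]

def robust_value(item, k=1.0):
    return item["info"] - k * item["se"]      # penalize uncertain estimates

test = sorted(items, key=robust_value, reverse=True)[:2]
print([i["id"] for i in test])                # picks b and c over the nominally more informative a and d
```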
New Meridian Corporation, 2020
The purpose of this report is to describe the technical qualities of the 2018-2019 operational administration of the English language arts/literacy (ELA/L) and mathematics summative assessments in grades 3 through 8 and high school. The ELA/L assessments focus on reading and comprehending a range of sufficiently complex texts independently and…
Descriptors: Language Arts, Literacy Education, Mathematics Education, Summative Evaluation
New Meridian Corporation, 2020
The purpose of this report is to describe the technical qualities of the 2018-2019 operational administration of the English language arts/literacy (ELA/L) and mathematics assessments in grades 3 through 8 and high school. New Meridian, in coordination with multiple states and vendors, developed an alternate form of the summative assessment to…
Descriptors: Language Arts, Literacy Education, Mathematics Education, Summative Evaluation
Peer reviewed
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
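For context, the core selection problem in automated test assembly can be written as a compact MIP: choose items from a bank to maximize information at a target ability subject to length and content constraints. The bank values and constraints below are invented; the article additionally automates the formatting of the selected items into the actual test form. Requires the PuLP package.

```python
# Core ATA item-selection MIP on a tiny illustrative bank.
import pulp

bank = {
    "i1": {"info": 0.40, "area": "algebra"},
    "i2": {"info": 0.35, "area": "algebra"},
    "i3": {"info": 0.30, "area": "geometry"},
    "i4": {"info": 0.45, "area": "geometry"},
    "i5": {"info": 0.25, "area": "algebra"},
}

prob = pulp.LpProblem("ata_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", bank, cat="Binary")                 # x[i] = 1 if item i is selected

prob += pulp.lpSum(bank[i]["info"] * x[i] for i in bank)           # maximize information at the target theta
prob += pulp.lpSum(x[i] for i in bank) == 3                        # fixed test length
prob += pulp.lpSum(x[i] for i in bank if bank[i]["area"] == "geometry") >= 1   # content coverage

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([i for i in bank if x[i].value() == 1])
```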
Peer reviewed
Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui – Educational Researcher, 2013
We describe an innovative automated test construction algorithm for building aligned achievement tests. Incorporating the algorithm into the test construction process, along with other procedures for building reliable and unbiased assessments, yields tests that are much more valid than those resulting from current test construction…
Descriptors: Achievement Tests, Automation, Test Construction, Alignment (Education)
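Alignment between a test and content standards is often quantified with an index of the form 1 minus half the sum of absolute differences between cell proportions of a content-by-cognitive-demand matrix (Porter's alignment index). The sketch below computes such an index on invented cell proportions; an index of this kind can serve as the alignment criterion inside an automated item-selection procedure, though the article's exact algorithm is not reproduced here.

```python
# Porter-style alignment index between a test form and content standards (illustrative data).
def alignment_index(test_props, standards_props):
    cells = set(test_props) | set(standards_props)
    return 1 - sum(abs(test_props.get(c, 0.0) - standards_props.get(c, 0.0)) for c in cells) / 2

standards = {("fractions", "procedures"): 0.30, ("fractions", "reasoning"): 0.20,
             ("geometry", "procedures"): 0.30, ("geometry", "reasoning"): 0.20}
test_form = {("fractions", "procedures"): 0.40, ("fractions", "reasoning"): 0.10,
             ("geometry", "procedures"): 0.35, ("geometry", "reasoning"): 0.15}

print(round(alignment_index(test_form, standards), 3))   # 1.0 would mean perfect alignment
```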
Peer reviewed
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
Peer reviewed
Embretson, Susan E. – Measurement: Interdisciplinary Research and Perspectives, 2004
The last century was marked by dazzling changes in many areas, such as technology and communications. Predictions about the second century of testing seem difficult in such a context. Yet, looking back to the turn of the last century, Kirkpatrick (1900), in his American Psychological Association presidential address, presented fundamental…
Descriptors: Ability, Testing, Futures (of Society), Psychometrics
Leuba, Richard J. – Engineering Education, 1986
Promotes the use of machine-scored tests in basic engineering science classes. Discusses some principles and practices of machine-scored testing. Provides several example test items. Argues that such tests can be used to enhance basic understanding of concepts and problem solving skills. (TW)
Descriptors: Automation, College Science, Engineering Education, Evaluation Methods
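At its simplest, machine scoring of the kind promoted here amounts to comparing each response vector against an answer key and totalling the matches, as in the small sketch below; the key and responses are invented examples, not items from the article.

```python
# Trivial machine-scoring sketch: count responses that match the answer key.
key = ["B", "D", "A", "C", "B"]

def score(responses, key):
    return sum(1 for r, k in zip(responses, key) if r == k)

print(score(["B", "D", "C", "C", "B"], key))   # 4 out of 5 correct
```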