Showing 1 to 15 of 105 results
Hacer Karamese – ProQuest LLC, 2022
Multistage adaptive testing (MST) has become popular in the testing industry because research has shown that it combines the advantages of both linear tests and item-level computer adaptive testing (CAT). Previous research has focused primarily on MST design issues such as panel design, module length, test length, distribution of test…
Descriptors: Adaptive Testing, Scoring, Computer Assisted Testing, Design
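The entry above treats MST at a conceptual level. As a rough illustration of how an MST differs from a linear test, the sketch below routes an examinee from a first-stage routing module to an easier or harder second-stage module based on a provisional score. The module names, item counts, and routing cutoff are hypothetical and are not taken from the study.

```python
# Minimal two-stage multistage-test (MST) routing sketch.
# Module names, item counts, and the routing cutoff are hypothetical.

def score_module(responses):
    """Provisional number-correct score for a completed module."""
    return sum(responses)

def route(responses, cutoff=6):
    """Route to the easier or harder second-stage module
    based on performance in the routing module."""
    return "hard_module" if score_module(responses) >= cutoff else "easy_module"

# Example: 10-item routing module with 7 correct responses.
first_stage = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]
print(route(first_stage))  # -> "hard_module"
```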
Peer reviewed
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage tests (MSTs) is difficult, particularly when the test spans several grade levels, and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
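The abstract above does not spell out the mapping methods it evaluates. One common way to place Angoff ratings on an IRT theta scale (not necessarily one of the three methods in the study) is to average the panelists' item-level ratings, sum them into an expected-score cut, and then invert the test characteristic curve. The item parameters and ratings below are hypothetical.

```python
# Map Angoff ratings onto a 2PL theta scale by inverting the test
# characteristic curve (TCC). All numbers are illustrative only.
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct score at theta."""
    return p_correct(theta, a, b).sum()

def theta_cut_from_angoff(ratings, a, b, grid=np.linspace(-4, 4, 8001)):
    """Theta whose expected score matches the mean Angoff cut score."""
    expected_cut = np.asarray(ratings).mean(axis=0).sum()  # average panelists, sum items
    scores = np.array([tcc(t, a, b) for t in grid])
    return grid[np.argmin(np.abs(scores - expected_cut))]

# Hypothetical 5-item form and two panelists' Angoff ratings.
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])   # discriminations
b = np.array([-0.5, 0.0, 0.3, 0.8, 1.2])  # difficulties
ratings = [[0.7, 0.6, 0.5, 0.4, 0.3],     # panelist 1
           [0.8, 0.6, 0.6, 0.3, 0.3]]     # panelist 2
print(round(theta_cut_from_angoff(ratings, a, b), 2))
```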
Peer reviewed
Baryktabasov, Kasym; Jumabaeva, Chinara; Brimkulov, Ulan – Research in Learning Technology, 2023
Many examinations with thousands of participating students are organized worldwide every year. Usually, this large number of students sit the exams simultaneously and answer almost the same set of questions. This method of learning assessment requires tremendous effort and resources to prepare the venues, print question books and organize the…
Descriptors: Information Technology, Computer Assisted Testing, Test Items, Adaptive Testing
Peer reviewed
Jolanta Kisielewska; Paul Millin; Neil Rice; Jose Miguel Pego; Steven Burr; Michal Nowakowski; Thomas Gale – Education and Information Technologies, 2024
Between 2018 and 2021, eight European medical schools took part in a study to develop a medical knowledge Online Adaptive International Progress Test. Here we discuss participants' self-perceptions to evaluate the acceptability of adaptive vs. non-adaptive testing. Study participants, students from across Europe at all stages of undergraduate medical…
Descriptors: Medical Students, Medical Education, Student Attitudes, Self Efficacy
Peer reviewed
Umi Laili Yuhana; Eko Mulyanto Yuniarno; Wenny Rahayu; Eric Pardede – Education and Information Technologies, 2024
In an online learning environment, it is important to establish a suitable assessment approach that can be adapted on the fly to accommodate the varying learning paces of students. At the same time, it is essential that assessment criteria remain compliant with the expected learning outcomes of the relevant education standard which predominantly…
Descriptors: Adaptive Testing, Electronic Learning, Elementary School Students, Student Evaluation
Peer reviewed
Lumbini Barua; Barbara Lockee – TechTrends: Linking Research and Practice to Improve Learning, 2025
The article highlights the growing significance of flexible assessment in higher education as institutions adapt to the increasingly diverse needs of their student populations. The demand for customizable educational experiences, heightened by the COVID-19 pandemic, has made flexibility in assessment essential for sustaining and improving student…
Descriptors: College Faculty, College Students, Adaptive Testing, Alternative Assessment
Peer reviewed
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the "SE" is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
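The stopping rule described above is straightforward to sketch: under a 2PL model the standard error of the provisional theta estimate is 1/sqrt(total item information), and the CAT terminates once that falls below a threshold. The item parameters and threshold in the sketch below are hypothetical, not from the study.

```python
# Standard-error (SE) stopping rule for a CAT under a 2PL model.
# All parameters are illustrative only.
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def standard_error(theta, administered):
    """SE of measurement given the (a, b) pairs administered so far."""
    info = sum(item_information(theta, a, b) for a, b in administered)
    return float("inf") if info == 0 else 1.0 / math.sqrt(info)

def should_stop(theta, administered, se_threshold=0.30):
    """SE stopping rule: terminate once precision is sufficient."""
    return standard_error(theta, administered) < se_threshold

# Hypothetical example: five administered items, provisional theta of 0.4.
administered = [(1.2, 0.1), (0.9, 0.5), (1.5, 0.3), (1.1, -0.2), (1.3, 0.6)]
print(should_stop(0.4, administered))
```

As the entry notes, when a bank only targets one end of the trait continuum, the summed information can plateau for examinees at the other end, so the SE may never reach the threshold and the rule alone cannot end the test.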
Peer reviewed
Kurdi, Ghader; Leo, Jared; Parsia, Bijan; Sattler, Uli; Al-Emari, Salam – International Journal of Artificial Intelligence in Education, 2020
While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows down the use of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing)…
Descriptors: Computer Assisted Testing, Adaptive Testing, Natural Language Processing, Questioning Techniques
Peer reviewed
Gönülates, Emre – Educational and Psychological Measurement, 2019
This article introduces the Quality of Item Pool (QIP) Index, a novel approach to quantifying the adequacy of an item pool of a computerized adaptive test for a given set of test specifications and examinee population. This index ranges from 0 to 1, with values close to 1 indicating the item pool presents optimum items to examinees throughout the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Error of Measurement
Peer reviewed
Angelone, Anna Maria; Galassi, Alessandra; Vittorini, Pierpaolo – International Journal of Learning Technology, 2022
The adoption of computerised adaptive testing (CAT) instead of classical fixed-item testing (FIT) raises questions from both teachers' and students' perspectives. The scientific literature shows that teachers using CAT instead of FIT should experience shorter times to complete the assessment and obtain more precise evaluations. As for the students, adaptive…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Freshmen, Student Attitudes
Peer reviewed
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Educational Measurement: Issues and Practice, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-in-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this article, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
Uotinen, Sanna; Ladonlahti, Tarja; Laamanen, Merja – European Journal of Open, Distance and E-Learning, 2021
E-authentication is one of the key topics in the field of online education and e-assessment. This study was aimed at investigating the user experiences of students with special educational needs and disabilities (SEND) while developing the accessible e-authentication system for higher education institutions. Altogether, 15 students tested the…
Descriptors: Foreign Countries, Student Evaluation, Evaluation Methods, Computer Assisted Testing
Peer reviewed
Gu, Lixiong; Ling, Guangming; Qu, Yanxuan – ETS Research Report Series, 2019
Research has found that the "a"-stratified item selection strategy (STR) for computerized adaptive tests (CATs) may lead to insufficient use of high a items at later stages of the tests and thus to reduced measurement precision. A refined approach, unequal item selection across strata (USTR), effectively improves test precision over the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Use, Test Items
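The a-stratified strategy (STR) discussed above is commonly implemented by splitting the pool into strata of ascending discrimination, drawing early items from low-a strata and later items from high-a strata, and within the current stratum choosing the item whose difficulty is closest to the provisional theta. The sketch below follows that common formulation with a hypothetical pool; the refined USTR approach mentioned in the entry would simply allocate unequal numbers of items per stratum.

```python
# a-stratified item selection (STR) sketch; pool and stage plan are hypothetical.
import numpy as np

def stratify_pool(a_params, n_strata):
    """Return a stratum index (0 = lowest a) for each item in the pool."""
    order = np.argsort(a_params)
    strata = np.empty_like(order)
    for s, chunk in enumerate(np.array_split(order, n_strata)):
        strata[chunk] = s
    return strata

def select_item(theta, b_params, strata, stage, used):
    """Pick the unused item in the current stratum with b closest to theta."""
    candidates = [i for i in range(len(b_params))
                  if strata[i] == stage and i not in used]
    return min(candidates, key=lambda i: abs(b_params[i] - theta))

# Hypothetical 9-item pool, 3 strata, one item drawn from each stratum in turn.
a = np.array([0.5, 0.6, 0.7, 0.9, 1.0, 1.1, 1.4, 1.6, 1.8])
b = np.array([-1.0, 0.2, 1.1, -0.5, 0.4, 1.3, -0.8, 0.1, 0.9])
strata = stratify_pool(a, n_strata=3)
used, theta = set(), 0.0
for stage in range(3):
    item = select_item(theta, b, strata, stage, used)
    used.add(item)
    print(f"stage {stage}: item {item} (a={a[item]}, b={b[item]})")
```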
Peer reviewed
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L. – Educational Measurement: Issues and Practice, 2015
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Descriptors: Computer Assisted Testing, Adaptive Testing, Alignment (Education), Test Content