Showing 1 to 15 of 427 results
Peer reviewed
Musa Adekunle Ayanwale; Mdutshekelwa Ndlovu – Journal of Pedagogical Research, 2024
The COVID-19 pandemic has had a significant impact on high-stakes testing, including the national benchmark tests in South Africa. Current linear testing formats have been criticized for their limitations, leading to a shift towards Computerized Adaptive Testing (CAT). Assessments with CAT are more precise and take less time. Evaluation of CAT…
Descriptors: Adaptive Testing, Benchmarking, National Competency Tests, Computer Assisted Testing
Peer reviewed
Baryktabasov, Kasym; Jumabaeva, Chinara; Brimkulov, Ulan – Research in Learning Technology, 2023
Many examinations with thousands of participating students are organized worldwide every year. Usually, this large number of students sit the exams simultaneously and answer almost the same set of questions. This method of learning assessment requires tremendous effort and resources to prepare the venues, print question books and organize the…
Descriptors: Information Technology, Computer Assisted Testing, Test Items, Adaptive Testing
Peer reviewed
Ozsoy, Seyma Nur; Kilmen, Sevilay – International Journal of Assessment Tools in Education, 2023
In this study, Kernel test equating methods were compared under NEAT and NEC designs. In the NEAT design, Kernel post-stratification and chain equating methods using optimal and large bandwidths were compared. In the NEC design, gender and/or computer/tablet use was considered as a covariate, and Kernel test equating methods were…
Descriptors: Equated Scores, Testing, Test Items, Statistical Analysis
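As a rough illustration of the continuization idea that Kernel equating builds on, the sketch below applies Gaussian-kernel smoothing to two hypothetical discrete score distributions and maps a form-X score to the form-Y scale by matching the continuized CDFs. It is a single-group simplification: the study's NEAT and NEC designs add post-stratification or chaining, and the score points, frequencies, and bandwidth here are illustrative, not taken from the paper.

```python
# Minimal sketch of Gaussian-kernel continuization and equipercentile mapping.
# Simplified: the standard Kernel equating continuization also rescales scores
# so the smoothed distribution keeps the discrete mean and variance.
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, scores, probs, h):
    """Continuized CDF of a discrete score distribution via Gaussian kernels."""
    return float(np.sum(np.asarray(probs) * norm.cdf((x - np.asarray(scores)) / h)))

def equate(x, scores_x, probs_x, scores_y, probs_y, h=0.6):
    """Map a form-X score to the form-Y scale by matching continuized CDFs."""
    p = kernel_cdf(x, scores_x, probs_x, h)
    grid = np.linspace(min(scores_y) - 3 * h, max(scores_y) + 3 * h, 2001)
    cdf_y = np.array([kernel_cdf(g, scores_y, probs_y, h) for g in grid])
    return float(np.interp(p, cdf_y, grid))   # numerical inverse of the form-Y CDF

# Hypothetical 0-5 score distributions for forms X and Y.
scores = np.arange(6)
probs_x = [0.05, 0.15, 0.25, 0.25, 0.20, 0.10]
probs_y = [0.10, 0.20, 0.25, 0.25, 0.15, 0.05]
print(equate(3, scores, probs_x, scores, probs_y))   # form-Y equivalent of X = 3
```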
Peer reviewed
Ghio, Fernanda Belén; Bruzzone, Manuel; Rojas-Torres, Luis; Cupani, Marcos – European Journal of Science and Mathematics Education, 2022
In recent decades, the development of computerized adaptive testing (CAT) has allowed more precise measurements with a smaller number of items. In this study, we develop an item bank (IB) to generate the adaptive algorithm and simulate the functioning of CAT to assess the domains of mathematical knowledge in Argentinian university students…
Descriptors: Test Items, Item Banks, Adaptive Testing, Mathematics Tests
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses the numbers of correct and wrong answers, which become the basis for calculating the expected response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
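The abstract does not reproduce the proposed statistic, so the sketch below only shows the general idea of a chi-square test for equally attractive distractors on a single item, using hypothetical counts; the study's formula derives its expected frequencies from the numbers of correct and wrong answers rather than assuming a uniform split.

```python
# Sketch of a chi-square test for equal distractor response frequencies on one item.
# Illustrative only: counts are hypothetical and the expected values here assume a
# uniform split of wrong answers across distractors.
from scipy.stats import chisquare

wrong_answers = 120                        # examinees who answered the item incorrectly
distractor_counts = [55, 40, 25]           # observed choices of distractors B, C, D
expected = [wrong_answers / len(distractor_counts)] * len(distractor_counts)

stat, p_value = chisquare(distractor_counts, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the distractors do not attract wrong answers equally.
```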
Peer reviewed
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
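As a toy illustration of the generation stage of template-based AIG, the sketch below fills the slots of a hypothetical text template from an item model to produce a small item family; the study's parent item is a non-verbal figural item from the BILSEM exam, which this text-only example does not attempt to reproduce.

```python
# Toy sketch of template-based automatic item generation: a parent item is turned
# into a template with slots, and crossing the slot values in the item model yields
# a family of new items with computed answer keys. Template and values are hypothetical.
from itertools import product

template = ("A pattern grows by {step} squares in each new row. Row 1 has {start} "
            "squares. How many squares are in row {row}?")
item_model = {"step": [2, 3], "start": [1, 4], "row": [5, 6]}

items = []
for step, start, row in product(*item_model.values()):
    stem = template.format(step=step, start=start, row=row)
    key = start + step * (row - 1)         # answer key follows from the item model
    items.append((stem, key))

for stem, key in items[:3]:
    print(stem, "->", key)
```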
Peer reviewed
Yasuda, Jun-ichiro; Hull, Michael M.; Mae, Naohiro – Physical Review Physics Education Research, 2022
This paper presents improvements made to a computerized adaptive testing (CAT)-based version of the FCI (FCI-CAT) with regard to test security and test efficiency. First, we will discuss measures to enhance test security by controlling for item overexposure, decreasing the risk that respondents may (i) memorize the content of a pretest for use on…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Risk Management
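The abstract does not specify how overexposure is controlled in the FCI-CAT, so the sketch below shows one common approach, a Sympson-Hetter style probabilistic filter, with hypothetical exposure parameters and item names.

```python
# Sketch of probabilistic item-exposure control during CAT item selection
# (Sympson-Hetter style). Items are ranked by information; each is administered
# with probability k_i, so highly informative items are not always used.
# Exposure parameters and item names are hypothetical.
import random

exposure_k = {"item_a": 0.4, "item_b": 1.0, "item_c": 0.7}   # administration probabilities

def select_item(ranked_items):
    """Walk candidates in order of information; admit each with probability k_i."""
    for item in ranked_items:
        if random.random() < exposure_k[item]:
            return item
    return ranked_items[-1]                # fall back to the last candidate

print(select_item(["item_a", "item_b", "item_c"]))
```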
Peer reviewed
Ayfer Sayin; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
Developments in the field of education have significantly affected test development processes, and computer-based testing has been introduced in many institutions. In our country, research on the application of measurement and evaluation tools in the computer environment for use with distance education is gaining momentum. A large pool of…
Descriptors: Turkish, Literature, Test Items, Item Banks
Peer reviewed
Mehri Izadi; Maliheh Izadi; Farrokhlagha Heidari – Education and Information Technologies, 2024
In today's environment of growing class sizes due to the prevalence of online and e-learning systems, providing one-to-one instruction and feedback has become a challenging task for teachers. Nevertheless, the dialectical integration of instruction and assessment into a seamless and dynamic activity can provide a continuous flow of assessment…
Descriptors: Adaptive Testing, Computer Assisted Testing, English (Second Language), Second Language Learning
Santi Lestari – Research Matters, 2025
The ability to draw visual representations such as diagrams and graphs is considered fundamental to science learning. Science exams therefore often include questions which require students to draw a visual representation, or to augment a partially provided one. The design features of such questions (e.g., layout of diagrams, amount of answer…
Descriptors: Science Education, Secondary Education, Visual Aids, Foreign Countries
Peer reviewed
Ondrej Klíma; Martin Lakomý; Ekaterina Volevach – International Journal of Social Research Methodology, 2024
We tested the impacts of Hofstede's cultural factors and mode of administration on item nonresponse (INR) for political questions in the European Values Study (EVS). We worked with the integrated European Values Study dataset, using descriptive analysis and multilevel binary logistic regression models. We concluded that (1) modes of administration…
Descriptors: Cultural Influences, Testing, Test Items, Responses
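For readers unfamiliar with the modelling approach, the sketch below fits a single-level binary logistic regression of item nonresponse on mode of administration and a respondent covariate; it is a simplification with invented data, whereas the study uses multilevel models with country-level cultural factors from the EVS.

```python
# Single-level sketch of modelling item nonresponse (INR) as a binary outcome.
# Data and variable names are invented; the study fits multilevel binary logistic
# regressions with Hofstede's cultural factors at the country level.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "inr":  [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0],    # 1 = political item left unanswered
    "mode": ["web", "face", "web", "face", "web", "web",
             "face", "web", "face", "face", "web", "face"],
    "age":  [25, 63, 41, 38, 70, 55, 29, 66, 44, 52, 73, 31],
})

model = smf.logit("inr ~ C(mode) + age", data=df).fit(disp=False)
print(model.summary())
```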
Peer reviewed
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Pearson, Christopher; Penna, Nigel – Assessment & Evaluation in Higher Education, 2023
E-assessments are becoming increasingly common and progressively more complex. Consequently, the design and marking of these longer, more complex questions is critically important. This article uses the NUMBAS e-assessment tool to investigate the best practice for creating longer questions and their mark schemes on surveying modules taken by engineering…
Descriptors: Automation, Scoring, Engineering Education, Foreign Countries
Peer reviewed
Carolyn Clarke – in education, 2024
This ethnographic case study, situated in Newfoundland and Labrador, Canada, examined the effects of full-scale provincial testing on families, its influences on homework, and familial accountability for teaching and learning. Data were drawn from family interviews, as well as letters and documents regarding homework. Teachers sensed a significant…
Descriptors: Academic Standards, Accountability, Testing, Homework
Peer reviewed
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
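To make the simulation protocol concrete, the sketch below runs a bare-bones CAT loop with a 2PL item pool, maximum-information item selection, and a grid-based EAP ability update; the pool is randomly generated and much smaller than the 165-item GCAT pool, and the details of Han's (2018a) protocol are not reproduced.

```python
# Bare-bones CAT simulation: 2PL items, maximum-information selection, EAP update.
# The item pool is random and illustrative, not the 165-item GCAT pool.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 50)              # discrimination parameters
b = rng.uniform(-2.0, 2.0, 50)             # difficulty parameters

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)              # Fisher information of a 2PL item

grid = np.linspace(-4, 4, 81)
posterior = np.exp(-grid**2 / 2)           # standard-normal prior (unnormalized)
true_theta, theta_hat, used = 0.7, 0.0, []

for _ in range(15):                        # fixed-length test of 15 items
    candidates = [j for j in range(len(a)) if j not in used]
    j = max(candidates, key=lambda k: info(theta_hat, a[k], b[k]))
    used.append(j)
    y = rng.random() < p_correct(true_theta, a[j], b[j])        # simulated response
    like = p_correct(grid, a[j], b[j]) if y else 1 - p_correct(grid, a[j], b[j])
    posterior = posterior * like
    theta_hat = float(np.sum(grid * posterior) / np.sum(posterior))   # EAP estimate

print(f"true theta = {true_theta}, estimated theta = {theta_hat:.2f}")
```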