Publication Date
In 2025 | 0
Since 2024 | 3
Since 2021 (last 5 years) | 16
Since 2016 (last 10 years) | 36
Since 2006 (last 20 years) | 56
Descriptor
Computer Assisted Testing | 57
Item Response Theory | 46
Foreign Countries | 38
Test Items | 23
Secondary School Students | 19
Achievement Tests | 15
Test Construction | 13
Adaptive Testing | 12
High School Students | 11
Comparative Analysis | 10
Language Tests | 10
Author
Chen, Li-Ju | 2
Dwandaru, Wipsar Sunu Brams | 2
Ho, Rong-Guey | 2
Istiyono, Edi | 2
Kalender, Ilker | 2
Kuo, Bor-Chen | 2
Petscher, Yaacov | 2
Tock, Jamie | 2
Ulitzsch, Esther | 2
Yen, Yung-Chin | 2
Abidin, Aang Zainul | 1
Publication Type
Journal Articles | 47
Reports - Research | 46
Reports - Evaluative | 6
Numerical/Quantitative Data | 3
Books | 2
Collected Works - General | 2
Collected Works - Proceedings | 2
Reports - Descriptive | 1
Speeches/Meeting Papers | 1
Education Level
Secondary Education | 57
Junior High Schools | 22
Middle Schools | 22
High Schools | 18
Elementary Education | 12
Higher Education | 9
Postsecondary Education | 8
Grade 7 | 7
Elementary Secondary Education | 6
Grade 8 | 6
Grade 9 | 6
Location
Taiwan | 6
Indonesia | 4
Finland | 3
United Kingdom | 3
Australia | 2
China | 2
Hong Kong | 2
Malaysia | 2
Maryland | 2
Thailand | 2
Turkey | 2
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1
Assessments and Surveys
Program for International Student Assessment | 9
Gates MacGinitie Reading Tests | 2
Measures of Academic Progress | 1
National Assessment of Educational Progress | 1
Trends in International Mathematics and Science Study | 1
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straightlining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
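To make the indicator logic above concrete, here is a minimal sketch (not the authors' implementation) of two such screening indicators, assuming a hypothetical respondents-by-items matrix of Likert responses and a parallel matrix of item-level response times; the cutoffs are illustrative choices only:
```python
import numpy as np

def straightlining_index(responses):
    """Proportion of item-to-item transitions that repeat the previous
    response option; 1.0 means the same option was chosen throughout."""
    return (responses[:, 1:] == responses[:, :-1]).mean(axis=1)

def rapid_responding_index(times, threshold_s=2.0):
    """Proportion of items answered faster than a fixed time threshold
    (the threshold here is an arbitrary illustrative value)."""
    return (times < threshold_s).mean(axis=1)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(100, 20))  # hypothetical Likert data
times = rng.gamma(shape=4.0, scale=2.0, size=(100, 20))
responses[0] = 3   # simulate one straightliner
times[1] = 0.5     # simulate one rusher

flagged = (straightlining_index(responses) > 0.9) | (rapid_responding_index(times) > 0.5)
print("flagged for C/IER screening:", np.where(flagged)[0])
```
In practice several such flags are combined, since (as the truncated sentence suggests) any single indicator can misclassify respondents.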
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens the validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
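The paper's approach is a full mixture model within an IRT framework; as a much-simplified illustration of the core idea of using response times to separate respondent classes, one could fit a two-component Gaussian mixture to mean log response times (the sklearn tooling and all values below are my assumptions, not the authors'):
```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical mean log response times: a slower, effortful class and a
# faster class whose responses may carry response-style bias.
log_rt = np.concatenate([rng.normal(2.5, 0.3, 180), rng.normal(1.2, 0.3, 20)])

gm = GaussianMixture(n_components=2, random_state=0).fit(log_rt.reshape(-1, 1))
posterior = gm.predict_proba(log_rt.reshape(-1, 1))
fast = int(np.argmin(gm.means_.ravel()))
print((posterior[:, fast] > 0.5).sum(), "respondents assigned to the fast class")
```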
Luz, Yael; Yerushalmy, Michal – Journal for Research in Mathematics Education, 2023
We report on an innovative design of algorithmic analysis that supports automatic online assessment of students' exploration of geometry propositions in a dynamic geometry environment. We hypothesized that difficulties with and misuse of terms or logic in conjectures are rooted in the early exploration stages of inquiry. We developed a generic…
Descriptors: Algorithms, Computer Assisted Testing, Geometry, Mathematics Instruction
Parker, Mark A. J.; Hedgeland, Holly; Jordan, Sally E.; Braithwaite, Nicholas St. J. – European Journal of Science and Mathematics Education, 2023
The study covers the development and testing of the alternative mechanics survey (AMS), a modified force concept inventory (FCI), which used automatically marked free-response questions. Data were collected over a period of three academic years from 611 participants who were taking physics classes at high school and university level. A total of…
Descriptors: Test Construction, Scientific Concepts, Physics, Test Reliability
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by implementing two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
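The excerpt mentions a logistic transformation of AES raw scores but cuts off before specifying it; a generic version of such a transformation maps unbounded raw scores onto a bounded grade scale, as in this sketch (all parameter values are placeholders, not the study's):
```python
import numpy as np

def logistic_rescale(raw, midpoint, scale, lo=0.0, hi=6.0):
    """Map unbounded raw AES scores onto a bounded [lo, hi] grade scale
    via a logistic curve; midpoint and scale would be fit empirically."""
    return lo + (hi - lo) / (1.0 + np.exp(-(raw - midpoint) / scale))

print(logistic_rescale(np.array([-1.0, 0.0, 1.5, 3.0]), midpoint=1.0, scale=0.8))
```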
Lai, Rina P. Y. – ACM Transactions on Computing Education, 2022
Computational Thinking (CT), entailing both domain-general and domain-specific skills, is a competency fundamental to computing education and beyond. However, because CT is a cross-domain competency, appropriate assessment design and methods remain equivocal. Indeed, the majority of existing assessments focus predominantly on measuring programming…
Descriptors: Computer Assisted Testing, Computation, Thinking Skills, Computer Science Education
Friyatmi; Mardapi, Djemari; Haryanto; Rahmi, Elvi – European Journal of Educational Research, 2020
Advances in information technology have changed conventional testing methods. The weaknesses of paper-based tests can be minimized by using computer-based tests (CBT). Developing a CBT requires a computerized item bank. This study aimed to develop a computerized item bank for classroom and school-based…
Descriptors: Computer Assisted Testing, Item Banks, High School Students, Foreign Countries
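The study's actual bank design is not described in the excerpt; as a sketch of what a computerized item bank minimally involves, here is one hypothetical structure keyed by item ID and searchable by calibrated IRT difficulty:
```python
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    stem: str
    options: list[str]
    key: int                     # index of the correct option
    difficulty: float            # IRT b-parameter from calibration
    discrimination: float = 1.0  # IRT a-parameter

@dataclass
class ItemBank:
    items: dict[str, Item] = field(default_factory=dict)

    def add(self, item: Item) -> None:
        self.items[item.item_id] = item

    def by_difficulty(self, lo: float, hi: float) -> list[Item]:
        """Retrieve calibrated items within a difficulty band."""
        return [i for i in self.items.values() if lo <= i.difficulty <= hi]

bank = ItemBank()
bank.add(Item("ECON-001", "A demand curve slopes...", ["upward", "downward"], 1, -0.4))
print([i.item_id for i in bank.by_difficulty(-1.0, 0.0)])
```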
von Zansen, Anna; Hilden, Raili; Laihanen, Emma – International Journal of Listening, 2022
In this study, we used Rasch measurement to investigate the fairness of the listening section of a national computerized high-stakes English test for differential item functioning (DIF) across gender subgroups. The computerized test format inspired us to investigate whether the items measure listening comprehension differently for females and…
Descriptors: High Stakes Tests, Listening Comprehension Tests, Listening Comprehension, Gender Differences
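The study applied Rasch-based DIF analysis, which is not reproduced here; as a compact stand-in illustrating what DIF screening across gender subgroups computes, here is a Mantel-Haenszel sketch (a different but standard DIF method) on hypothetical scored responses, stratifying by rest score:
```python
import numpy as np

def mantel_haenszel_or(item, total, group):
    """Mantel-Haenszel common odds ratio for one dichotomous item,
    stratified by rest score; values far from 1 suggest DIF.
    group: 0 = reference subgroup, 1 = focal subgroup."""
    rest = total - item
    num = den = 0.0
    for s in np.unique(rest):
        m = rest == s
        a = np.sum((group == 0) & (item == 1) & m)  # reference, correct
        b = np.sum((group == 0) & (item == 0) & m)  # reference, incorrect
        c = np.sum((group == 1) & (item == 1) & m)  # focal, correct
        d = np.sum((group == 1) & (item == 0) & m)  # focal, incorrect
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else float("nan")

rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(500, 20))   # hypothetical 0/1 item scores
group = rng.integers(0, 2, size=500)
print(mantel_haenszel_or(data[:, 0], data.sum(axis=1), group))
```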
Bijlsma, Hannah J. E.; Glas, Cees A. W.; Visscher, Adrie J. – School Effectiveness and School Improvement, 2022
Modern digital technologies enable the efficient collection and processing of student perceptions of teaching quality. However, students' ratings could be confounded by student, teacher, and classroom characteristics. We investigated students' ratings of 26 teachers who used the digital tool Impact! in their mathematics lessons with 14- and…
Descriptors: Computer Assisted Testing, Student Attitudes, Teacher Effectiveness, Student Characteristics
Kao, Yu-Ting; Kuo, Hung-Chih – Interactive Learning Environments, 2023
This study implemented the principles of dynamic assessment (DA) with computer technology, iSpring Quiz Maker, to (1) identify the English listening difficulties of 172 L2 English learners, (2) diagnose their individual learning needs, and (3) promote their future potential abilities. Upon evaluating the participating junior high school students'…
Descriptors: Listening Comprehension Tests, English (Second Language), Second Language Learning, Second Language Instruction
Sahin, Füsun; Colvin, Kimberly F. – Large-scale Assessments in Education, 2020
The item responses of examinees who rapid-guess, that is, who do not spend enough time reading and engaging with an item, will not reflect their true ability on that item. Rapid disengagement refers to rapidly selecting a response to multiple-choice items (i.e., rapid guessing), omitting items, or providing short, unrelated answers to open-ended items in an…
Descriptors: Guessing (Tests), Item Response Theory, Reaction Time, Learner Engagement
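A common way to operationalize the rapid-guessing flag described above (not necessarily Sahin and Colvin's exact choice) is a normative threshold, e.g. flagging responses faster than some fraction of each item's median response time; the data and fraction below are illustrative:
```python
import numpy as np

def rapid_guess_flags(times, fraction=0.10):
    """Flag a response as a rapid guess when its time falls below the
    given fraction of that item's median response time."""
    return times < fraction * np.median(times, axis=0)

rng = np.random.default_rng(3)
times = rng.gamma(shape=5.0, scale=6.0, size=(300, 40))  # hypothetical seconds
flags = rapid_guess_flags(times)
rte = 1.0 - flags.mean(axis=1)  # response-time effort: share of solution behavior
print("examinees with RTE below 0.90:", int(np.sum(rte < 0.90)))
```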
Ponce, Héctor R.; Mayer, Richard E.; Loyola, María Soledad – Journal of Educational Computing Research, 2021
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test…
Descriptors: Computer Assisted Testing, Test Construction, Standardized Tests, Elementary School Students
Lundgren, Erik; Eklöf, Hanna – Educational Research and Evaluation, 2020
The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and…
Descriptors: Computer Assisted Testing, Problem Solving, Response Style (Tests), Test Items
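In the spirit of the extract-features-then-cluster pipeline Lundgren and Eklöf describe (their actual variables and algorithm are not given in the excerpt), a minimal sketch might standardize a few per-examinee process features and run k-means with four clusters, matching the four clusters the abstract reports:
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical per-examinee features extracted from raw process logs:
# time on task (s), number of actions, time to first action (s).
features = np.column_stack([
    rng.gamma(5.0, 30.0, 400),
    rng.poisson(25, 400).astype(float),
    rng.gamma(2.0, 3.0, 400),
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for k in range(4):
    sel = labels == k
    print(f"cluster {k}: n={sel.sum()}, mean time on task={features[sel, 0].mean():.0f}s")
```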
Samsudin, Mohd Ali; Chut, Thodsaphorn Som; Ismail, Mohd Erfy; Ahmad, Nur Jahan – EURASIA Journal of Mathematics, Science and Technology Education, 2020
Current assessment demands a more personalised and less time-consuming testing environment. Computer Adaptive Testing (CAT) is seen as a more effective alternative to conventional testing in meeting current assessment standards. This research reports on the calibration of the released Grade 8 Science…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Science Tests
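Calibration details aside, the defining mechanic of the CAT the abstract refers to is adaptive item selection; a minimal sketch under a 2PL model (hypothetical bank parameters, and omitting the ability re-estimation a real CAT performs after each response) is:
```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def next_item(theta, a, b, administered):
    """Choose the unadministered item with maximum information at theta."""
    info = info_2pl(theta, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

rng = np.random.default_rng(5)
a = rng.uniform(0.8, 2.0, 60)   # hypothetical calibrated discriminations
b = rng.normal(0.0, 1.0, 60)    # hypothetical calibrated difficulties
administered, theta = set(), 0.0
for _ in range(5):
    j = next_item(theta, a, b, administered)
    administered.add(j)
    # a real CAT would score the response and update theta here
    print(f"administer item {j} (a={a[j]:.2f}, b={b[j]:+.2f})")
```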
Xin Wei – Educational Researcher, 2024
This study investigates the relationship between text-to-speech (TTS) usage and item-by-item performance in the 2017 eighth-grade National Assessment of Educational Progress (NAEP) math assessment, focusing on students with disabilities (SWDs), English language learners (ELLs), and their general education (GE) peers. Results indicate that all…
Descriptors: Assistive Technology, Students with Disabilities, English Language Learners, Regular and Special Education Relationship