Showing 1 to 15 of 20 results
Peer reviewed
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
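
The closed formula itself is cut off in the snippet above, but the kind of relation involved can be sketched under a Rasch-model assumption (an assumption of this note, not necessarily the authors' exact formulation): an item answered at medium difficulty carries maximal information, which ties the standard error of its difficulty estimate directly to the number of respondents N, provided all N respondents answer it near p = 1/2.

    I_j(\theta) = P_j(\theta)\,\bigl[1 - P_j(\theta)\bigr],
    \qquad P_j = \tfrac{1}{2} \;\Rightarrow\; I_j = \tfrac{1}{4},
    \qquad \mathrm{SE}\bigl(\hat{b}_j\bigr) \approx \frac{1}{\sqrt{N/4}} = \frac{2}{\sqrt{N}}

Halving the standard error thus requires roughly four times as many respondents, which is the sort of sample-size and test-length trade-off the abstract refers to.
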
Peer reviewed
Hedlefs-Aguilar, Maria Isolde; Morales-Martinez, Guadalupe Elizabeth; Villarreal-Lozano, Ricardo Jesus; Moreno-Rodriguez, Claudia; Gonzalez-Rodriguez, Erick Alejandro – European Journal of Educational Research, 2021
This study explored the cognitive mechanism behind information integration in test anxiety judgments among 140 engineering students. An experiment was designed to test four factors in combination (test goal orientation, test cognitive functioning level, test difficulty, and test mode). The experimental task required participants to read 36 scenarios,…
Descriptors: Test Anxiety, Engineering Education, Algebra, College Students
Peer reviewed
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
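
As background on how such position effects are commonly modelled (a generic sketch, not necessarily the specification used in this article), a Rasch-type model can be extended with a position term so that an item's effective difficulty drifts with its serial position k:

    \Pr\bigl(X_{pjk} = 1\bigr) = \frac{\exp\bigl(\theta_p - b_j - \gamma k\bigr)}{1 + \exp\bigl(\theta_p - b_j - \gamma k\bigr)}

Here \theta_p is examinee ability, b_j the item's baseline difficulty, and a positive \gamma corresponds to a fatigue-type effect in which items placed later behave as harder.
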
Peer reviewed
Qian, Yizhou; Lehman, James – Journal of Educational Computing Research, 2020
This study implemented a data-driven approach to identify Chinese high school students' common errors in a Java-based introductory programming course, using data from an automated assessment tool called Mulberry. Students' error-related behaviors were also analyzed, and their relationships to success in introductory programming were…
Descriptors: High School Students, Error Patterns, Introductory Courses, Computer Science Education
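
The abstract does not describe Mulberry's data format, so the following Python sketch only illustrates the general data-driven idea: normalize error messages from student submissions and rank the most frequent ones. The record fields and normalization rules are assumptions, not the tool's API.

    from collections import Counter
    import re

    # Hypothetical submission records; a real export from an automated
    # assessment tool would carry many more fields.
    submissions = [
        {"student": "s01", "error": "';' expected at line 12"},
        {"student": "s02", "error": "cannot find symbol: variable total"},
        {"student": "s01", "error": "';' expected at line 30"},
        {"student": "s02", "error": "cannot find symbol: variable counter"},
    ]

    def normalize(message):
        """Collapse line numbers and variable names so similar errors group together."""
        message = re.sub(r"line \d+", "line <n>", message)
        message = re.sub(r"variable \w+", "variable <id>", message)
        return message

    # Tally normalized messages and print the most common error types.
    counts = Counter(normalize(s["error"]) for s in submissions)
    for error, n in counts.most_common():
        print(f"{n:2d}  {error}")
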
Peer reviewed
Soland, James – Educational Measurement: Issues and Practice, 2019
As computer-based tests become more common, there is a growing wealth of metadata related to examinees' response processes, which include solution strategies, concentration, and operating speed. One common type of metadata is item response time. While response times have been used extensively to improve estimates of achievement, little work…
Descriptors: Test Items, Item Response Theory, Metadata, Self Efficacy
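
For readers unfamiliar with how response-time metadata enters measurement models, one widely used lognormal formulation (mentioned here as general background, not as the approach taken in this article) treats the log response time of person p on item i as normally distributed around the item's time intensity minus the person's speed:

    \ln T_{pi} \sim \mathcal{N}\!\bigl(\beta_i - \tau_p,\; \alpha_i^{-2}\bigr)

with time intensity \beta_i, person speed \tau_p, and time discrimination \alpha_i; jointly modelling this with the item responses is what allows response times to sharpen achievement estimates.
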
Peer reviewed
Porter, Tenelle; Molina, Diego Catalán; Blackwell, Lisa; Roberts, Sylvia; Quirk, Abigail; Duckworth, Angela L.; Trzesniewski, Kali – Journal of Learning Analytics, 2020
Mastery behaviours -- seeking out challenging tasks and continuing to work on them despite difficulties -- are integral to achievement but difficult to measure with precision. The current study reports on the development and validation of the computer-based persistence, effort, resilience, and challenge-seeking (PERC) task in two demographically…
Descriptors: Mastery Learning, Resilience (Psychology), Difficulty Level, Computer Assisted Instruction
Peer reviewed
Solnyshkina, Marina I.; Zamaletdinov, Radif R.; Gorodetskaya, Ludmila A.; Gabitov, Azat I. – Journal of Social Studies Education Research, 2017
The article presents the results of an exploratory study of the use of T.E.R.A., an automated tool measuring text complexity and readability based on the assessment of five text complexity parameters: narrativity, syntactic simplicity, word concreteness, referential cohesion and deep cohesion. Aimed at finding ways to utilize T.E.R.A. for…
Descriptors: Readability Formulas, Readability, Foreign Countries, Computer Assisted Testing
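
T.E.R.A. builds on discourse-level indices (narrativity, cohesion, and so on) that cannot be reproduced in a few lines, but a classic surface-level readability formula gives a feel for this style of computation. The snippet below is purely illustrative and is not the tool's method; the syllable counter is a rough vowel-group heuristic.

    import re

    def count_syllables(word):
        """Rough heuristic: count vowel groups; real tools use pronunciation dictionaries."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * len(words) / len(sentences) - 84.6 * syllables / len(words)

    # Higher scores indicate easier text; scores drop as sentences and words get longer.
    print(round(flesch_reading_ease("The cat sat on the mat. It was warm there."), 1))
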
Adjei, Seth A.; Botelho, Anthony F.; Heffernan, Neil T. – Grantee Submission, 2016
Prerequisite skill structures have been closely studied in past years, leading to many data-intensive methods aimed at refining such structures. While many of these proposed methods have yielded success, defining and refining hierarchies of skill relationships are often difficult tasks. The relationship between skills in a graph could either be…
Descriptors: Prediction, Learning Analytics, Attribution Theory, Prerequisites
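
To make the graph idea concrete, a prerequisite structure can be held as a mapping from each skill to the skills it directly requires; a topological sort then yields an ordering in which every skill comes after its prerequisites. This is only an illustrative sketch with made-up skill names, not the authors' refinement method.

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Each skill maps to the set of skills it directly requires (hypothetical names).
    prerequisites = {
        "fraction_addition": {"fraction_concepts", "whole_number_addition"},
        "fraction_concepts": {"whole_number_division"},
        "whole_number_division": {"whole_number_multiplication"},
    }

    # static_order() raises CycleError if the hypothesized structure is not a DAG,
    # which is itself a useful sanity check on a proposed skill hierarchy.
    order = list(TopologicalSorter(prerequisites).static_order())
    print(order)
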
Peer reviewed
Dizon, Gilbert – TESL-EJ, 2016
The Internet has made it possible for teachers to administer online assessments with affordability and ease. However, little is known about Japanese English as a Foreign Language (EFL) students' attitudes toward internet-based tests (IBTs). Therefore, this study aimed to measure the perceptions of IBTs among Japanese English language learners with the…
Descriptors: Internet, Foreign Countries, English (Second Language), Technology Uses in Education
Peer reviewed
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the GRE® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Peer reviewed
Crossley, Scott A.; Yang, Hae Sung; McNamara, Danielle S. – Reading in a Foreign Language, 2014
This study uses a moving windows self-paced reading task to assess both text comprehension and processing time of authentic texts and these same texts simplified to beginning and intermediate levels. Forty-eight second language learners each read 9 texts (3 each at the authentic, beginning, and intermediate levels). Repeated measures ANOVAs…
Descriptors: Reading Comprehension, Reading Processes, Second Language Instruction, Difficulty Level
Peer reviewed
Yip, Chi Kwong; Man, David W. K. – International Journal of Rehabilitation Research, 2009
This study investigates the validity of a newly developed computerized cognitive assessment system (CCAS) that is equipped with rich multimedia to generate simulated testing situations and considers both test item difficulty and the test taker's ability. It is also hypothesized that better predictive validity of the CCAS in self-care of persons…
Descriptors: Test Items, Content Validity, Predictive Validity, Patients
Peer reviewed
Memmert, D.; Hagemann, N.; Althoetmar, R.; Geppert, S.; Seiler, D. – Research Quarterly for Exercise and Sport, 2009
This study uses three experiments with different kinds of training conditions to investigate the "easy-to-hard" principle, context interference conditions, and feedback effects for learning anticipatory skills in badminton. Experiment 1 (N = 60) showed that a training program that gradually increases the difficulty level has no advantage over the…
Descriptors: Feedback (Response), Racquet Sports, Difficulty Level, Skill Development
Peer reviewed
Johnson-Glenberg, Mina C. – Educational Media International, 2010
This research examined the impact of formative quizzes on e-learning designed to teach volunteers how to tutor struggling readers. Three research questions were addressed: (1) Do embedded quizzes facilitate learning of e-content? (2) Does the announcement of upcoming quizzes affect learning? (3) Does prior knowledge interact with quizzing and…
Descriptors: Electronic Learning, Prior Learning, Testing, Adult Learning
Peer reviewed
Topping, K. J.; Samuels, J.; Paul, T. – British Educational Research Journal, 2008
To explore whether different balances of fiction/non-fiction reading and challenge might help explain differences in reading achievement between genders, data on 45,670 pupils who independently read over 3 million books were analysed. Moderate (rather than high or low) levels of challenge were positively associated with achievement gain, but…
Descriptors: Independent Reading, Reading Achievement, Achievement Gains, Gender Differences