Publication Date
In 2025: 132
Since 2024: 461
Since 2021 (last 5 years): 1635
Since 2016 (last 10 years): 2963
Since 2006 (last 20 years): 4862
Audience
Practitioners: 181
Researchers: 145
Teachers: 120
Policymakers: 38
Administrators: 36
Students: 15
Counselors: 9
Parents: 4
Media Staff: 3
Support Staff: 3
Location
Australia: 167
United Kingdom: 152
Turkey: 124
China: 114
Germany: 107
Canada: 105
Spain: 91
Taiwan: 88
Netherlands: 72
Iran: 68
United States: 67
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 4
Meets WWC Standards with or without Reservations: 4
Does Not Meet Standards: 5
Petscher, Yaacov; Mitchell, Alison M.; Foorman, Barbara R. – Reading and Writing: An Interdisciplinary Journal, 2015
A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is…
Descriptors: Computer Assisted Testing, Vocabulary, Item Response Theory, Reliability
Degiorgio, Lisa – Measurement and Evaluation in Counseling and Development, 2015
Equivalency of test versions is often assumed by counselors and evaluators. This study examined two versions, paper-pencil and computer based, of the Driver Risk Inventory, a DUI/DWI (driving under the influence/driving while intoxicated) risk assessment. An overview of computer-based testing and standards for equivalency is also provided. Results…
Descriptors: Risk Assessment, Drinking, Computer Assisted Testing, Measures (Individuals)
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
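For readers unfamiliar with the maximum Fisher information (MFI) criterion that this abstract contrasts with a-stratification, a minimal Python sketch may help; the 2PL response model and the small item pool below are illustrative assumptions, not material from the study.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item_mfi(theta, pool, administered):
    """MFI selection: pick the unadministered item with the highest
    Fisher information at the current ability estimate."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *pool[i]))

# Hypothetical item pool of (a, b) = (discrimination, difficulty) pairs
pool = [(0.8, -1.0), (1.5, 0.0), (2.0, 0.2), (1.0, 1.0)]
first = select_item_mfi(theta=0.0, pool=pool, administered=set())
print(first)  # the high-discrimination item with difficulty near theta
```

Because Fisher information grows with the square of the discrimination parameter a, MFI repeatedly favors high-a items; a-stratified designs counter this exposure imbalance by restricting early selections to lower-a strata.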
Cropley, David; Cropley, Arthur – Educational Technology, 2016
Computer-assisted assessment (CAA) is problematic when it comes to fostering creativity, because in educational thinking the essence of creativity is not finding the correct answer but generating novelty. The idea of "functional" creativity provides rubrics that can serve as the basis for forms of CAA leading to either formative or…
Descriptors: Creativity, Creativity Tests, Formative Evaluation, Computer Assisted Testing
Makransky, Guido; Dale, Philip S.; Havmose, Philip; Bleses, Dorthe – Journal of Speech, Language, and Hearing Research, 2016
Purpose: This study investigated the feasibility and potential validity of an item response theory (IRT)-based computerized adaptive testing (CAT) version of the MacArthur-Bates Communicative Development Inventory: Words & Sentences (CDI:WS; Fenson et al., 2007) vocabulary checklist, with the objective of reducing length while maintaining…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Language Tests
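The core idea in this abstract, shortening an instrument via IRT-based CAT while maintaining precision, can be sketched in a few lines. This is a minimal illustration assuming a 2PL model, a standard normal prior, and made-up item parameters, not the CDI:WS instrument itself.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses):
    """Expected a posteriori (EAP) ability estimate and posterior SD
    under a standard normal prior, on a coarse quadrature grid.
    `responses` is a list of ((a, b), score) pairs with score 0 or 1."""
    grid = [i / 10.0 for i in range(-40, 41)]
    post = []
    for t in grid:
        w = math.exp(-0.5 * t * t)  # N(0, 1) prior, unnormalized
        for (a, b), y in responses:
            p = p_correct(t, a, b)
            w *= p if y else (1.0 - p)
        post.append(w)
    total = sum(post)
    theta = sum(t * w for t, w in zip(grid, post)) / total
    var = sum((t - theta) ** 2 * w for t, w in zip(grid, post)) / total
    return theta, math.sqrt(var)

# With no responses, the estimate is just the prior (mean 0, SD ~1).
theta0, se0 = eap_estimate([])
# Three correct answers on hypothetical items of rising difficulty
# pull the estimate upward and shrink the posterior SD.
items = [(1.5, -0.5), (1.5, 0.5), (1.5, 1.0)]
theta3, se3 = eap_estimate([(it, 1) for it in items])
print(round(theta3, 2), round(se3, 2))
```

In an adaptive administration, items keep being selected and scored until the posterior SD drops below a target threshold, which is how CAT versions cut test length without a large loss of measurement precision.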
Kane, Michael T.; Tannenbaum, Richard J. – Measurement: Interdisciplinary Research and Perspectives, 2016
It is one thing to produce an innovative, construct-based assessment task; it's another to produce 10 a year that are comparable in difficulty, measure the same competencies, are free of differential item functioning, and can be scaled and equated. These challenges contributed to the failure of the performance (or authentic) assessment movement of…
Descriptors: Computer System Design, Computer Assisted Testing, Courseware, Science Education
Donald, Ellen Kroog – ProQuest LLC, 2016
Despite the increasing number of individuals taking computer-based tests, little is known about how examinees perceive computer-based testing environments and the extent to which these testing environments are perceived to affect test performance. The purpose of the present study was to assess the testing environment as perceived by individuals…
Descriptors: Licensing Examinations (Professions), Computer Assisted Testing, Physical Therapy, Allied Health Personnel
Godwin-Jones, Robert – Language Learning & Technology, 2018
This article provides an update to the author's overview of developments in second language (L2) online writing that he wrote in 2008. There has been renewed interest in L2 writing through the wide use of social media, along with the rising popularity of computer-mediated communication (CMC) and telecollaboration (class-based online exchanges).…
Descriptors: Second Language Learning, Computer Mediated Communication, Second Language Instruction, Writing Instruction
Sawaki, Yasuyo; Sinharay, Sandip – Language Testing, 2018
The present study examined the reliability of the reading, listening, speaking, and writing section scores for the TOEFL iBT® test and their interrelationship in order to collect empirical evidence to support, respectively, the "generalization" inference and the "explanation" inference in the TOEFL iBT validity argument…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Forssman, Linda; Wass, Sam V. – Child Development, 2018
This study investigated transfer effects of gaze-interactive attention training to more complex social and cognitive skills in infancy. Seventy 9-month-olds were assigned to a training group (n = 35) or an active control group (n = 35). Before, after, and at 6-week follow-up both groups completed an assessment battery assessing transfer to…
Descriptors: Visual Perception, Interpersonal Communication, Infant Behavior, Communication Skills
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain – Language Learning and Development, 2018
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Descriptors: Infants, Sign Language, Language Acquisition, Auditory Perception
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Bulut, Okan; Lei, Ming; Guo, Qi – International Journal of Research & Method in Education, 2018
Item positions in educational assessments are often randomized across students to prevent cheating. However, if altering item positions results in any significant impact on students' performance, it may threaten the validity of test scores. Two widely used approaches for detecting position effects -- logistic regression and hierarchical…
Descriptors: Alternative Assessment, Disabilities, Computer Assisted Testing, Structural Equation Models
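The logistic-regression approach to detecting position effects that this abstract names can be illustrated with a short, self-contained simulation; the response model, the size of the position effect, and the fitting method are assumptions made for the sketch, not details from the study.

```python
import math
import random

random.seed(42)

# Simulate responses with a fatigue-type position effect: the log-odds
# of a correct answer drop by 0.05 per position. All numbers here are
# illustrative assumptions.
N_PERSONS, N_ITEMS = 500, 40
rows = []  # (centered item position, correct 0/1)
for _ in range(N_PERSONS):
    ability = random.gauss(0.0, 1.0)
    for pos in range(N_ITEMS):
        p = 1.0 / (1.0 + math.exp(-(ability - 0.05 * pos)))
        x = pos - (N_ITEMS - 1) / 2.0  # center position for stable fitting
        rows.append((x, 1 if random.random() < p else 0))

# Logistic regression of correctness on position, fit by plain gradient
# ascent on the average log-likelihood (intercept b0, slope b1).
b0 = b1 = 0.0
for _ in range(200):
    g0 = g1 = 0.0
    for x, y in rows:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += 0.01 * g0 / len(rows)
    b1 += 0.01 * g1 / len(rows)

# A clearly negative slope flags a position effect.
print(f"position slope: {b1:.3f}")
```

A hierarchical (mixed-effects) model, the second approach the abstract names, would additionally model the person-level ability variation that this pooled regression leaves in the error term.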
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement