Showing 1 to 15 of 59 results
Peer reviewed
Timpe-Laughlin, Veronika; Choi, Ikkyu – Language Assessment Quarterly, 2017
Pragmatics has been a key component of language competence frameworks. While the majority of second/foreign language (L2) pragmatics tests have targeted productive skills, the assessment of receptive pragmatic skills remains a developing field. This study explores validation evidence for a test of receptive L2 pragmatic ability called the American…
Descriptors: Pragmatics, Language Tests, Test Validity, Receptive Language
Peer reviewed
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written as a response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to the essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
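The rating design Cohen, Levi, and Ben-Simon describe, in which many raters score each essay, the averaged rating approximates the essay's true score, and the automated essay scoring (AES) model is evaluated under cross-validation, can be sketched as follows. This is a toy illustration under stated assumptions, not the study's system: the simulated ratings, the single surface feature, and the least-squares model are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_essays, n_raters, k_folds = 250, 15, 5

# Simulated human ratings: a latent essay quality plus per-rater noise.
quality = rng.normal(70, 10, n_essays)
ratings = quality[:, None] + rng.normal(0, 5, (n_essays, n_raters))
true_score = ratings.mean(axis=1)   # averaged rating as a true-score approximation

# One toy feature standing in for real AES features, plus an intercept column.
X = np.column_stack([quality + rng.normal(0, 3, n_essays),
                     np.ones(n_essays)])

# k-fold cross-validation: each essay is scored by a model fit on the other folds.
aes = np.empty(n_essays)
folds = np.array_split(np.arange(n_essays), k_folds)
for test_idx in folds:
    train = np.setdiff1d(np.arange(n_essays), test_idx)
    coef, *_ = np.linalg.lstsq(X[train], true_score[train], rcond=None)
    aes[test_idx] = X[test_idx] @ coef

r = np.corrcoef(aes, true_score)[0, 1]
print(f"cross-validated AES vs. averaged human score: r = {r:.2f}")
```

The cross-validation scheme matters because scoring essays with a model trained on those same essays would overstate agreement with the human ratings.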
Peer reviewed
Pujayanto, Pujayanto; Budiharti, Rini; Adhitama, Egy; Nuraini, Niken Rizky Amalia; Putri, Hanung Vernanda – Physics Education, 2018
This research proposes the development of a web-based assessment system to identify students' misconception. The system, named WAS (web-based assessment system), can identify students' misconception profile on linear kinematics automatically after the student has finished the test. The test instrument was developed and validated. Items were…
Descriptors: Misconceptions, Physics, Science Instruction, Databases
Peer reviewed
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
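The GIKS comparison described above, which matches a student's writing sample against an expert's to produce keyword occurrence data, can be sketched as a simple occurrence profile. The keyword list and the coverage statistic below are invented for illustration; this is not the GIKS algorithm.

```python
# Hypothetical sketch: which expert keywords occur in a student's writing sample.
import re

EXPERT_KEYWORDS = {"mean", "median", "skew", "outlier", "distribution"}  # assumed list

def keyword_profile(text: str, keywords: set[str]) -> dict[str, bool]:
    """Map each expert keyword to whether it occurs in the text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {kw: kw in tokens for kw in keywords}

student = "The outlier pulls the mean away from the median."
profile = keyword_profile(student, EXPERT_KEYWORDS)
coverage = sum(profile.values()) / len(profile)
print(profile, f"coverage = {coverage:.0%}")
```

A system like GIKS goes further than raw occurrence (e.g., relating keywords to one another as a knowledge structure), but the occurrence profile is the starting point the abstract names.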
Peer reviewed
Scherer, Ronny; Meßinger-Koppelt, Jenny; Tiemann, Rüdiger – International Journal of STEM Education, 2014
Background: Complex problem-solving competence is regarded as a key construct in science education. However, because assessing it requires interactive procedures that are opaque to the test taker, appropriate measures of the construct are rare. This paper therefore presents the development and validation of a computer-based problem-solving environment,…
Descriptors: Computer Assisted Testing, Problem Solving, Chemistry, Science Tests
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
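The distractor design Davison et al. describe, in which each wrong option corresponds to a specific comprehension process, lends itself to profile scoring rather than a simple right/wrong count. A minimal sketch follows; the item keys and process labels are invented for illustration and do not reproduce the actual MOCCA mapping.

```python
# Hypothetical sketch: tally an examinee's errors by the comprehension process
# that each chosen distractor reflects.
from collections import Counter

# For each item: the correct option, and the process each distractor maps to.
ITEM_MAP = {
    1: {"key": "A", "B": "paraphrase", "C": "lateral_connection"},
    2: {"key": "C", "A": "paraphrase", "B": "lateral_connection"},
    3: {"key": "B", "A": "lateral_connection", "C": "paraphrase"},
}

def error_profile(responses: dict[int, str]) -> Counter:
    """Count wrong answers by the comprehension process of the chosen distractor."""
    profile = Counter()
    for item, choice in responses.items():
        info = ITEM_MAP[item]
        if choice != info["key"]:
            profile[info[choice]] += 1
    return profile

# Item 1 answered wrong (paraphrase distractor), item 2 right, item 3 wrong.
print(error_profile({1: "B", 2: "C", 3: "A"}))
```

This is why the abstract says the structure "requires revised thinking about measurement": two examinees with the same number correct can have very different process profiles.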
Peer reviewed
Dermo, John; Boyne, James – Practitioner Research in Higher Education, 2014
We describe a study conducted during 2009-12 into innovative assessment practice, evaluating an assessed coursework task on a final year Medical Genetics module for Biomedical Science undergraduates. An authentic e-assessment coursework task was developed, integrating objectively marked online questions with an online DNA sequence analysis tool…
Descriptors: Biomedicine, Medical Education, Computer Assisted Testing, Courseware
Peer reviewed
Gehsmann, Kristin; Spichtig, Alexandra; Tousley, Elias – Literacy Research: Theory, Method, and Practice, 2017
Assessments of developmental spelling, also called spelling inventories, are commonly used to understand students' orthographic knowledge (i.e., knowledge of how written words work) and to determine their stages of spelling and reading development. The information generated by these assessments is used to inform teachers' grouping practices and…
Descriptors: Spelling, Computer Assisted Testing, Grouping (Instructional Purposes), Teaching Methods
Peer reviewed
Jansen, Renée S.; van Leeuwen, Anouschka; Janssen, Jeroen; Kester, Liesbeth; Kalz, Marco – Journal of Computing in Higher Education, 2017
The number of students engaged in Massive Open Online Courses (MOOCs) is increasing rapidly. Due to the autonomy of students in this type of education, students in MOOCs are required to regulate their learning to a greater extent than students in traditional, face-to-face education. However, there is no questionnaire available suited for this…
Descriptors: Online Courses, Independent Study, Questionnaires, Likert Scales
Peer reviewed
Barabadi, Elyas; Khajavy, Gholam Hassan; Kamrood, Ali Mehri – International Journal of Instruction, 2018
The current study reports the results of a project aimed at assessing L2 listening comprehension by drawing on two approaches to dynamic assessment: interventionist and interactionist. The former approach was actualized by providing two graduated hints which were fixed and standardized for all test takers while the latter was actualized by asking…
Descriptors: Intervention, Interaction, Listening Comprehension, Listening Comprehension Tests
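The interventionist condition Barabadi et al. describe (fixed, graduated hints standardized across all test takers) is typically scored by reducing credit for each hint consumed before a correct response. A minimal sketch under that assumption; the two-hint scheme and point values are illustrative, not the study's rubric.

```python
# Hypothetical sketch of interventionist dynamic assessment scoring:
# the same graduated hints for everyone, with credit docked per hint used.
HINTS = ["Listen again to the key sentence.", "Focus on the verb tense."]  # fixed order

def score_item(attempts_correct: list[bool], max_score: int = 3) -> int:
    """Award max_score minus one point per hint consumed before a correct answer.

    attempts_correct[0] is the unassisted attempt; each later entry is the
    attempt made after one more graduated hint.
    """
    for hints_used, correct in enumerate(attempts_correct):
        if correct:
            return max(max_score - hints_used, 0)
    return 0

print(score_item([False, False, True]))  # correct only after both hints
```

The interactionist approach in the same study would not fit this scheme, since its mediation is negotiated per learner rather than fixed in advance.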
Peer reviewed
Jin, Yan; Yan, Ming – Language Assessment Quarterly, 2017
One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…
Descriptors: Writing Tests, Computer Assisted Testing, Computer Literacy, Construct Validity
Peer reviewed
Rios, Joseph A.; Liu, Ou Lydia – American Journal of Distance Education, 2017
Online higher education institutions are presented with the concern of how to obtain valid results when administering student learning outcomes (SLO) assessments remotely. Traditionally, there has been a great reliance on unproctored Internet test administration (UIT) due to increased flexibility and reduced costs; however, a number of validity…
Descriptors: Online Courses, Testing, Test Wiseness, Academic Achievement
Peer reviewed
Kumar, K.; Roberts, C.; Bartle, E.; Eley, D. S. – Advances in Health Sciences Education, 2018
Written tests for selection into medicine have demonstrated reliability and there is accumulating evidence regarding their validity, but we know little about the broader impacts or consequences of medical school selection tests from the perspectives of key stakeholders. In this first Australian study of its kind, we use consequential validity as a…
Descriptors: Test Validity, Test Reliability, Foreign Countries, Online Surveys
Peer reviewed
Lee, Shinhye; Winke, Paula – Language Testing, 2018
We investigated how young language learners process their responses on and perceive a computer-mediated, timed speaking test. Twenty 8-, 9-, and 10-year-old non-native English-speaking children (NNSs) and eight same-aged, native English-speaking children (NSs) completed seven computerized sample TOEFL® Primary™ speaking test tasks. We investigated…
Descriptors: Elementary School Students, Second Language Learning, Responses, Computer Assisted Testing