Showing 1 to 15 of 104 results
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Peer reviewed
Holly Robson; Harriet Thomasson; Matthew H. Davis – International Journal of Language & Communication Disorders, 2024
Background: The use of telepractice in aphasia research and therapy is increasing in frequency. Teleassessment in aphasia has been demonstrated to be reliable. However, neuropsychological and clinical language comprehension assessments are not always readily translatable to an online environment and people with severe language comprehension or…
Descriptors: Aphasia, Severity (of Disability), Videoconferencing, Comparative Analysis
Peer reviewed
Ahmed Abdel-Al Ibrahim, Khaled; Karimi, Ali Reza; Abdelrasheed, Nasser Said Gomaa; Shatalebi, Vida – Language Testing in Asia, 2023
Dynamic assessment is heavily based on Vygotskian socio-cultural theory, and in recent years researchers have shown interest in the theory as a way to facilitate learning. This study attempted to examine the comparative effect of group dynamic assessment (GDA) and computerized dynamic assessment (CDA) on listening development, L2 learners'…
Descriptors: Evaluation Methods, Computer Assisted Testing, Second Language Learning, Listening Skills
Peer reviewed
Lin, Jian-Wei; Tsai, Chia-Wen; Hsu, Chu-Ching – Interactive Learning Environments, 2023
Different e-learning technologies may offer different incentive factors, which influence behavioural intention. Moreover, when adopting a new e-learning technology for an extended period, learners' perceptions and learning behaviour may change during the learning period. Unfortunately, as formative assessments (FAs) are often continuously…
Descriptors: Comparative Analysis, Evaluation Methods, Formative Evaluation, Game Based Learning
Peer reviewed
Lemmo, Alice – International Journal of Science and Mathematics Education, 2021
Comparative studies of paper-and-pencil and computer-based tests principally focus on statistical analysis of students' performances. In educational assessment, comparing students' performance (in terms of right or wrong results) does not imply a comparison of the problem-solving processes followed by students. In this paper, we present a theoretical…
Descriptors: Computer Assisted Testing, Comparative Analysis, Evaluation Methods, Student Evaluation
Peer reviewed
Sun, Zhijun; Xu, Peng; Wang, Jianqin – Language Assessment Quarterly, 2023
The construct of "learning potential" has been proposed to capture differences between learners' independent performance and their performance during Dynamic Assessment (DA). This paper introduces a new learning potential score (LPS) formula, implemented in a DA study involving Pakistani learners of L2 Chinese. Learners were randomly assigned to a control or experimental…
Descriptors: Chinese, Second Language Learning, Second Language Instruction, Evaluation Methods
Peer reviewed
Renner, Elizabeth; Somai, Rosyl S.; Van der Stigchel, Stefan; Campbell, Clare; Kean, Donna; Caldwell, Christine A. – Infant and Child Development, 2021
Assessing children's working memory capacity (WMC) can be challenging for a variety of reasons, including the rapid increase in WMC across early childhood. Here, we developed and piloted an adapted WMC task, which involved minimal equipment, could be performed rapidly, and did not rely on verbal production ability (to facilitate the use of the…
Descriptors: Task Analysis, Short Term Memory, Child Development, Computer Assisted Testing
Peer reviewed
Rassaei, Ehsan – Language Learning Journal, 2023
The present study investigated the effects of text-based and audio-based dynamic glosses on L2 vocabulary learning within a sociocultural approach. Dynamic glosses were operationalised as a set of incrementally ordered prompts, provided during text-based and audio-based interactions that guided the participants to identify the meaning of unknown…
Descriptors: Vocabulary Development, Second Language Learning, Second Language Instruction, Student Evaluation
Peer reviewed
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
Peer reviewed
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive and could benefit from an automated approach. We compare a main-ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Peer reviewed
Hassler, Kendyl; Pearce, Kelly J.; Serfass, Thomas L. – International Journal of Social Research Methodology, 2018
This study compares the cost, completion times, and percent completion of electronic tablet (n = 244) versus paper-based (n = 398) questionnaires administered to participants of scenic raft trips on the Snake River, Grand Teton National Park. We hypothesized that e-tablet questionnaires would (1) cost less, (2) be completed faster, and (3) be completely filled…
Descriptors: Surveys, Costs, Time, Computer Assisted Testing
Peer reviewed
Han, Chao; Xiao, Xiaoyan – Language Testing, 2022
The quality of sign language interpreting (SLI) is a construct of keen interest among practitioners, educators and researchers, calling for reliable and valid assessment. The extant literature offers a diverse array of methods for measuring SLI quality, ranging from traditional error analysis to recent rubric scoring. In this study, we want to…
Descriptors: Comparative Analysis, Sign Language, Deaf Interpreting, Evaluators
Peer reviewed
PDF full text available on ERIC
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2018
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Descriptors: Evaluation Methods, Tests, Computer Assisted Testing, Scores
Peer reviewed
PDF full text available on ERIC
Linlin, Cao – English Language Teaching, 2020
Through Many-Facet Rasch analysis, this study explores the rating differences between one automated computer rater and five expert teacher raters in scoring 119 students on a computerized English listening-speaking test. Results indicate that both the automatic and the teacher raters demonstrate good inter-rater reliability, though the automatic rater…
Descriptors: Language Tests, Computer Assisted Testing, English (Second Language), Second Language Learning
Peer reviewed
Faniran, Victor Temitayo; Ajayi, Nurudeen A. – Africa Education Review, 2018
Assessments are important to academic institutions because they help in evaluating students' knowledge. The conduct of assessments has been influenced by the continuous evolution of information technology, and academic institutions now use computers for assessments, often known as Computer-Based Assessments (CBAs), in tandem with…
Descriptors: Foreign Countries, College Students, Student Attitudes, Computer Assisted Testing