Showing 1 to 15 of 73 results
Peer reviewed
Christian X. Navarro-Cota; Ana I. Molina; Miguel A. Redondo; Carmen Lacave – IEEE Transactions on Education, 2024
Contribution: This article describes the process used to create a questionnaire to evaluate the usability of mobile learning applications (CECAM). The questionnaire includes specific questions to assess user interface usability and pedagogical usability. Background: Nowadays, mobile applications are expanding rapidly and are commonly used in…
Descriptors: Usability, Questionnaires, Electronic Learning, Computer Oriented Programs
Peer reviewed
Ting Zhang; Paul Bailey; Yuqi Liao; Emmanuel Sikali – Large-scale Assessments in Education, 2024
The EdSurvey package helps users download, explore variables in, extract data from, and run analyses on large-scale assessment data. The analysis functions in EdSurvey account for the use of plausible values for test scores, survey sampling weights, and their associated variance estimator. We describe the capabilities of the package in the context…
Descriptors: National Competency Tests, Information Retrieval, Data Collection, Test Validity
Denise Swanson; Gerald Tindal – Behavioral Research and Teaching, 2024
This technical report provides an authoritative bibliographic resource of all the studies conducted on "easyCBM®" and published on the main website for Behavioral Research and Teaching under Publications (https://brtprojects.org). The "easyCBM®" software is a direct descendant of "Curriculum-based Measurement" (CBM)…
Descriptors: Bibliographies, Computer Software, Test Construction, Test Reliability
Peer reviewed
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Peer reviewed
Sonique Sailsman; Emma El-Shami – Quarterly Review of Distance Education, 2024
Nurse educators at the undergraduate level spend significant time developing and revising exam questions. Following the exam administration, course faculty have the opportunity to complete an item analysis and question revision to improve reliability and validity. A challenge faculty face is tracking these exam changes when teaching as part of a…
Descriptors: Nursing Education, Nursing Students, College Faculty, Test Construction
Peer reviewed
Liou, Gloria; Bonner, Cavan V.; Tay, Louis – International Journal of Testing, 2022
With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are…
Descriptors: Psychometrics, Computer Assisted Testing, Adaptive Testing, Data
Peer reviewed
Barry, Carol L.; Jones, Andrew T.; Ibáñez, Beatriz; Grambau, Marni; Buyske, Jo – Educational Measurement: Issues and Practice, 2022
In response to the COVID-19 pandemic, the American Board of Surgery (ABS) shifted from in-person to remote administrations of the oral certifying exam (CE). Although the overall exam architecture remains the same, there are a number of differences in administration and staffing costs, exam content, security concerns, and the tools used to give the…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Verbal Tests
Peer reviewed
Ying Xu; Xiaodong Li; Jin Chen – Language Testing, 2025
This article provides a detailed review of the Computer-based English Listening Speaking Test (CELST) used in Guangdong, China, as part of the National Matriculation English Test (NMET) to assess students' English proficiency. The CELST measures listening and speaking skills as outlined in the "English Curriculum for Senior Middle…
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Listening Comprehension Tests
Peer reviewed
Bronwen Cowie – New Zealand Journal of Educational Studies, 2024
Assessment makes visible what we value and, reciprocally, what is assessed tends to become what is taken to be of value. Building on this, it can be argued that assessment does more than measure what is present; rather, it 'makes up' people. This piece offers a reflective commentary on some of the insights to be gained from the Research Analysis and…
Descriptors: National Standards, Student Attitudes, Teacher Attitudes, Parent Attitudes
Peer reviewed
Lenz, A. Stephen; Ault, Haley; Balkin, Richard S.; Barrio Minton, Casey; Erford, Bradley T.; Hays, Danica G.; Kim, Bryan S. K.; Li, Chi – Measurement and Evaluation in Counseling and Development, 2022
In April 2021, The Association for Assessment and Research in Counseling Executive Council commissioned a time-referenced task group to revise the Responsibilities of Users of Standardized Tests (RUST) Statement (3rd edition) published by the Association for Assessment in Counseling (AAC) in 2003. The task group developed a work plan to implement…
Descriptors: Responsibility, Standardized Tests, Counselor Training, Ethics
Maddox, Bryan – OECD Publishing, 2023
The digital transition in educational testing has introduced many new opportunities for technology to enhance large-scale assessments. These include the potential to collect and use log data on test-taker response processes routinely, and on a large scale. Process data has long been recognised as a valuable source of validation evidence in…
Descriptors: Measurement, Inferences, Test Reliability, Computer Assisted Testing
Peer reviewed
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Peer reviewed
Wise, Steven L. – Education Inquiry, 2019
A decision of whether to move from paper-and-pencil to computer-based tests is based largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of…
Descriptors: Computer Assisted Testing, Cheating, Test Wiseness, Scores
Peer reviewed; PDF available on ERIC
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Peer reviewed
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows the implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory