Showing 1 to 15 of 33 results
Peer reviewed
Choi, Yun Deok – Language Testing in Asia, 2022
A much-debated question in the L2 assessment field is whether computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on…
Descriptors: Test Validity, Scores, Computer Assisted Testing, English (Second Language)
Peer reviewed
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Peer reviewed
Yaru Meng; Hua Fu; Chuang Wang – Language Learning & Technology, 2024
There is growing literature on computerized dynamic assessment (C-DA) wherein individual items are accompanied by mediating prompts, but its effectiveness at fine-grained levels across time has not been explored sufficiently. This study constructed a computerized listening dynamic assessment (CLDA) system, where mediation was informed by an…
Descriptors: Computer Assisted Testing, Test Validity, Audio Equipment, Audiometric Tests
Peer reviewed
Anne-Mai Meesak; Dmitri Rozgonjuk; Tiia Õun; Eve Kikas – Education 3-13, 2024
Children's development during early childhood affects their well-being and educational success, but there are few reliable assessment instruments available. The aim of the study was to develop, pilot and validate an e-assessment instrument for assessing five-year-old children's development in cognitive processes, learning, language and…
Descriptors: Test Validity, Computer Assisted Testing, Measures (Individuals), Child Development
Peer reviewed
Huang, Heng-Tsung Danny; Hung, Shao-Ting Alan; Chao, Hsiu-Yi; Chen, Jyun-Hong; Lin, Tsui-Peng; Shih, Ching-Lin – Language Assessment Quarterly, 2022
Prompted by Taiwanese university students' increasing demand for English proficiency assessment, the absence of a test designed specifically for this demographic subgroup, and the lack of a localized and freely-accessible proficiency measure, this project set out to develop and validate a computerized adaptive English proficiency testing (E-CAT)…
Descriptors: Computer Assisted Testing, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Fairbairn, Judith; Spiby, Richard – European Journal of Special Needs Education, 2019
Language test developers have a responsibility to ensure that their tests are accessible to test takers of various backgrounds and characteristics and also that they have the opportunity to perform to the best of their ability. This principle is widely recognised by educational and language testing associations in guidelines for the production and…
Descriptors: Testing, Language Tests, Test Construction, Testing Accommodations
Daniel Rodriguez-Segura; Beth E. Schueler – Annenberg Institute for School Reform at Brown University, 2022
School closures induced by COVID-19 placed heightened emphasis on alternative ways to measure student learning besides in-person exams. We leverage the administration of phone-based assessments (PBAs) measuring numeracy and literacy for primary school children in Kenya, along with in-person standardized tests administered to the same students…
Descriptors: Foreign Countries, School Closing, COVID-19, Pandemics
Peer reviewed
PDF on ERIC
Dembitzer, Leah; Zelikovitz, Sarah; Kettler, Ryan J. – International Journal of Educational Technology, 2017
A partnership was created between psychologists and computer programmers to develop a computer-based assessment program. Psychometric concerns of accessibility, reliability, and validity were juxtaposed with core development concepts of usability and user-centric design. Phases of development were iterative, with evaluation phases alternating with…
Descriptors: Computer Assisted Testing, Test Reliability, Test Validity, Usability
Peer reviewed
Ling, Guangming – Language Assessment Quarterly, 2017
To investigate whether the type of keyboard used in exams introduces any construct-irrelevant variance to the TOEFL iBT Writing scores, we surveyed 17,040 TOEFL iBT examinees from 24 countries on their keyboard-related perceptions and preferences and analyzed the survey responses together with their test scores. Results suggest that controlling…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Peer reviewed
PDF on ERIC
Papageorgiou, Spiros; Wu, Sha; Hsieh, Ching-Ni; Tannenbaum, Richard J.; Cheng, Mengmeng – ETS Research Report Series, 2019
The past decade has seen an emerging interest in mapping (aligning or linking) test scores to language proficiency levels of external performance scales or frameworks, such as the Common European Framework of Reference (CEFR), as well as locally developed frameworks, such as China's Standards of English Language Ability (CSE). Such alignment is…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
Jansen, Renée S.; van Leeuwen, Anouschka; Janssen, Jeroen; Kester, Liesbeth; Kalz, Marco – Journal of Computing in Higher Education, 2017
The number of students engaged in Massive Open Online Courses (MOOCs) is increasing rapidly. Due to the autonomy of students in this type of education, students in MOOCs are required to regulate their learning to a greater extent than students in traditional, face-to-face education. However, there is no questionnaire available suited for this…
Descriptors: Online Courses, Independent Study, Questionnaires, Likert Scales
Peer reviewed
Mix, Daniel F.; Tao, Shuqin – AERA Online Paper Repository, 2017
Purposes: This study uses think-alouds and cognitive interviews to provide validity evidence for an online formative assessment--i-Ready Standards Mastery (iSM) mini-assessments--which involves a heavy use of innovative items. iSM mini-assessments are intended to help teachers determine student understanding of each of the on-grade-level Common…
Descriptors: Formative Evaluation, Computer Assisted Testing, Test Validity, Student Evaluation
Sinclair, Andrea; Deatz, Richard; Johnston-Fisher, Jessica; Levinson, Heather; Thacker, Arthur – Partnership for Assessment of Readiness for College and Careers, 2015
The overall purpose of the research studies described in this report was to investigate the quality of the administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) field test during the spring of 2014. These research studies were conducted for the purpose of formative evaluation. Findings from these studies are…
Descriptors: Standardized Tests, College Readiness, Career Readiness, Test Validity
Peer reviewed
Murray, Keith B.; Zdravkovic, Srdan – Journal of Education for Business, 2016
Considerable debate continues regarding the efficacy of the website RateMyProfessors.com (RMP). To date, however, virtually no experimental research has been reported that bears directly on questions of sampling adequacy or item adequacy in producing the favorable correlations that have been reported. The authors compare the data…
Descriptors: Computer Assisted Testing, Computer Software Evaluation, Student Evaluation of Teacher Performance, Item Analysis
Peer reviewed
Razi, Salim – SAGE Open, 2015
Similarity reports of plagiarism detectors should be approached with caution as they may not be sufficient to support allegations of plagiarism. This study developed a 50-item rubric to simplify and standardize evaluation of academic papers. In the spring semester of 2011-2012 academic year, 161 freshmen's papers at the English Language Teaching…
Descriptors: Foreign Countries, Scoring Rubrics, Writing Evaluation, Writing (Composition)