Showing 1 to 15 of 35 results
Peer reviewed
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e., questions) which equitably measure various types of skills for students at different levels is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent if student evaluations are to be objective and effective…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Peer reviewed
Michelle Cheong – Journal of Computer Assisted Learning, 2025
Background: Increasingly, students are using ChatGPT to assist them in learning and even in completing their assessments, raising concerns about academic integrity and the loss of critical thinking skills. Many articles have suggested that educators redesign assessments to be more 'Generative-AI-resistant' and focus on assessing students on higher order…
Descriptors: Artificial Intelligence, Performance Based Assessment, Spreadsheets, Models
Peer reviewed
Bingxue Zhang; Yang Shi; Yuxing Li; Chengliang Chai; Longfeng Hou – Interactive Learning Environments, 2023
The adaptive learning environment provides learning support suited to the individual characteristics of students, and its student model is the key element in promoting individualized learning. This paper provides a systematic overview of the existing student models, showing that the Elo rating system…
Descriptors: Electronic Learning, Models, Students, Individualized Instruction
Peer reviewed
Gombert, Sebastian; Di Mitri, Daniele; Karademir, Onur; Kubsch, Marcus; Kolbe, Hannah; Tautz, Simon; Grimm, Adrian; Bohm, Isabell; Neumann, Knut; Drachsler, Hendrik – Journal of Computer Assisted Learning, 2023
Background: Formative assessments are needed to monitor how student knowledge develops throughout a unit. Constructed-response items, which require learners to formulate their own free-text responses, are well suited for testing their active knowledge. However, assessing such constructed responses in an automated fashion is a complex task…
Descriptors: Coding, Energy, Scientific Concepts, Formative Evaluation
Peer reviewed
Tim Jacobbe; Bob delMas; Brad Hartlaub; Jeff Haberstroh; Catherine Case; Steven Foti; Douglas Whitaker – Numeracy, 2023
The development of assessments as part of the funded LOCUS project is described. The assessments measure students' conceptual understanding of statistics as outlined in the GAISE PreK-12 Framework. Results are reported from a large-scale administration to 3,430 students in grades 6 through 12 in the United States. Items were designed to assess…
Descriptors: Statistics Education, Common Core State Standards, Student Evaluation, Elementary School Students
Peer reviewed
Langbeheim, Elon; Ben-Eliyahu, Einat; Adadan, Emine; Akaygun, Sevil; Ramnarain, Umesh Dewnarain – Chemistry Education Research and Practice, 2022
Learning progressions (LPs) are novel models for the development of assessments in science education that often use a scale to categorize students' levels of reasoning. Pictorial representations are important in chemistry teaching and learning, and also in LPs, but the differences between pictorial and verbal items in chemistry LPs are unclear. In…
Descriptors: Science Instruction, Learning Trajectories, Chemistry, Thinking Skills
Peer reviewed
Albacete, Patricia; Silliman, Scott; Jordan, Pamela – Grantee Submission, 2017
Intelligent tutoring systems (ITSs), like human tutors, try to adapt to a student's knowledge level so that instruction is tailored to their needs. One aspect of this adaptation relies on the ability to understand the student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Multiple Choice Tests, Computer Assisted Testing
Peer reviewed
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver – Large-scale Assessments in Education, 2017
Background: A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals in order to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to…
Descriptors: Student Evaluation, International Assessment, Adults, Competence
Thummaphan, Phonraphee – ProQuest LLC, 2017
The present study aimed to represent the innovative assessments that support students' learning in STEM education through using the integrative framework for Cognitive Diagnostic Modeling (CDM). This framework is based on three components: cognition, observation, and interpretation (National Research Council, 2001). Specifically, this dissertation…
Descriptors: STEM Education, Cognitive Processes, Observation, Psychometrics
Peer reviewed
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
Bramley, Tom – Cambridge Assessment, 2014
The aim of this study was to compare models of assessment structure for achieving differentiation between examinees of different levels of attainment in the GCSE in England. GCSEs are high-stakes, curriculum-based public examinations taken by 16-year-olds at the end of compulsory schooling. The context for the work was an intense period of debate…
Descriptors: Foreign Countries, Exit Examinations, Alternative Assessment, High Stakes Tests
Peer reviewed
Ong, Yoke Mooi; Williams, Julian; Lamprianou, Iasonas – International Journal of Testing, 2015
The purpose of this article is to explore crossing differential item functioning (DIF) in a test drawn from a national examination of mathematics for 11-year-old pupils in England. An empirical dataset was analyzed to explore DIF by gender in a mathematics assessment. A two-step process involving the logistic regression (LR) procedure for…
Descriptors: Mathematics Tests, Gender Differences, Test Bias, Test Items
College Board, 2012
Looking beyond the right or wrong answer is imperative to the development of effective educational environments conducive to Pre-AP work in math. This presentation explores a system of evaluation in math that provides a personalized, student-reflective model correlated to consortia-based assessment. Using examples of students' work that includes…
Descriptors: Student Evaluation, Mathematics Instruction, Correlation, Educational Assessment
Peer reviewed
Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh – English Language Teaching, 2013
Assessment is central to education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Evaluation Methods
Diakow, Ronli Phyllis – ProQuest LLC, 2013
This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…
Descriptors: Inferences, Educational Assessment, Academic Achievement, Educational Research