Showing 1 to 15 of 727 results
Peer reviewed
Marta Siedlecka; Piotr Litwin; Paulina Szyszka; Boryslaw Paulewicz – European Journal of Psychology of Education, 2025
Students change their responses during tests, and these revisions are often correct. Some studies have suggested that decisions regarding revisions are informed by metacognitive monitoring. We investigated whether assessing and reporting response confidence increases the accuracy of revisions and the final test score, and whether confidence in a…
Descriptors: Student Evaluation, Decision Making, Responses, Achievement Tests
Peer reviewed
Yue Liu; Zhen Li; Hongyun Liu; Xiaofeng You – Applied Measurement in Education, 2024
Low test-taking effort of examinees has been considered a source of construct-irrelevant variance in item response modeling, leading to serious consequences on parameter estimation. This study aims to investigate how non-effortful response (NER) influences the estimation of item and person parameters in item-pool scale linking (IPSL) and whether…
Descriptors: Item Response Theory, Computation, Simulation, Responses
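For readers unfamiliar with item-pool scale linking: a common approach places parameter estimates from one calibration onto another scale through a linear transformation estimated from common items (a generic mean/sigma sketch, not necessarily the method used in this study):

\theta^{*} = A\theta + B, \qquad a^{*} = a/A, \qquad b^{*} = Ab + B,

with A = \sigma(b_Y)/\sigma(b_X) and B = \mu(b_Y) - A\,\mu(b_X) computed from the common items' difficulty estimates; non-effortful responses distort the estimated difficulties and hence the linking constants A and B.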
Peer reviewed
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e. questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent, if student evaluations are to be objective and effective.…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Peer reviewed
Kofi Nkonkonya Mpuangnan – Review of Education, 2024
Assessment practices play a crucial role in fostering student learning and guiding instructional decision-making. The ability to construct effective test items is of utmost importance in evaluating student learning and shaping instructional strategies. This study aims to investigate the skills of Ghanaian basic schoolteachers in test item…
Descriptors: Test Items, Test Construction, Student Evaluation, Foreign Countries
Peer reviewed
Yunting Liu; Shreya Bhandari; Zachary A. Pardos – British Journal of Educational Technology, 2025
Effective educational measurement relies heavily on the curation of well-designed item pools. However, item calibration is time consuming and costly, requiring a sufficient number of respondents to estimate the psychometric properties of items. In this study, we explore the potential of six different large language models (LLMs; GPT-3.5, GPT-4,…
Descriptors: Artificial Intelligence, Test Items, Psychometrics, Educational Assessment
Peer reviewed
Arif Cem Topuz; Kinshuk – Educational Technology Research and Development, 2024
Online assessments of learning, or online exams, have become increasingly widespread with the rise of distance learning. Online exams are preferred by many students and are perceived as a quick and easy tool for measuring knowledge. By contrast, some students are concerned about the possibility of cheating and technological difficulties in online…
Descriptors: Computer Assisted Testing, Student Evaluation, Evaluation Methods, Student Attitudes
Mingjia Ma – ProQuest LLC, 2023
Response time is an important research topic in the field of psychometrics. This dissertation explores several response time properties across item characteristics and examinee characteristics, as well as the interactions between response time and response outcomes, using data from a statewide mathematics assessment in two grades.…
Descriptors: Reaction Time, Mathematics Tests, Standardized Tests, State Standards
Peer reviewed
Michelle Cheong – Journal of Computer Assisted Learning, 2025
Background: Increasingly, students are using ChatGPT to assist them in learning and even in completing their assessments, raising concerns about academic integrity and the loss of critical thinking skills. Many articles have suggested that educators redesign assessments to be more 'Generative-AI-resistant' and focus on assessing students on higher order…
Descriptors: Artificial Intelligence, Performance Based Assessment, Spreadsheets, Models
Peer reviewed
Sharma, Harsh; Mathur, Rohan; Chintala, Tejas; Dhanalakshmi, Samiappan; Senthil, Ramalingam – Education and Information Technologies, 2023
Examination assessments undertaken by educational institutions are pivotal, as they are one of the fundamental steps in determining students' understanding and achievement in a distinct subject or course. Questions must be framed on the topics to meet the learning objectives and assess the student's capability in a particular subject. The…
Descriptors: Taxonomy, Student Evaluation, Test Items, Questioning Techniques
Peer reviewed
PDF on ERIC
Erlina Fatkur Rohmah; Sukarmin; Daru Wahyuningsih – Pegem Journal of Education and Instruction, 2024
The study aimed to analyze the content validity of the STEM-integrated thermal and transport concept inventory instrument used to measure the problem-solving abilities of high school students. The instrument consisted of nine description questions. This study is development research. The steps in this research are…
Descriptors: Content Validity, Measures (Individuals), Concept Formation, STEM Education
Peer reviewed
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Peer reviewed
Pearson, Christopher; Penna, Nigel – Assessment & Evaluation in Higher Education, 2023
E-assessments are becoming increasingly common and progressively more complex. Consequently, how these longer, more complex questions are designed and marked is of critical importance. This article uses the NUMBAS e-assessment tool to investigate best practice for creating longer questions and their mark schemes on surveying modules taken by engineering…
Descriptors: Automation, Scoring, Engineering Education, Foreign Countries
Peer reviewed
Bingxue Zhang; Yang Shi; Yuxing Li; Chengliang Chai; Longfeng Hou – Interactive Learning Environments, 2023
The adaptive learning environment provides learning support suited to the individual characteristics of students, and its student model is the key element in promoting individualized learning. This paper provides a systematic overview of existing student models, consequently showing that the Elo rating system…
Descriptors: Electronic Learning, Models, Students, Individualized Instruction
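As background to the Elo-based student models surveyed above: after each scored response s_t \in \{0, 1\}, the standard Elo update adjusts the student's ability rating (a minimal sketch, assuming the usual one-parameter logistic expectation):

\hat{\theta}_{t+1} = \hat{\theta}_t + K\,(s_t - P_t), \qquad P_t = \frac{1}{1 + e^{-(\hat{\theta}_t - b_i)}},

where \hat{\theta}_t is the student's current rating, b_i is the item's difficulty rating (updated symmetrically with opposite sign), and K is a step-size constant.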
Peer reviewed
PDF on ERIC
Maristela Petrovic-Dzerdz – Collected Essays on Learning and Teaching, 2024
Large introductory classes, with their expansive curriculum, demand assessment strategies that blend efficiency with reliability, prompting the consideration of multiple-choice (MC) tests as a viable option. Crafting a high-quality MC test, however, necessitates a meticulous process involving reflection on assessment format appropriateness, test…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Alignment (Education)
Peer reviewed
Cuhadar, Ismail; Binici, Salih – Educational Measurement: Issues and Practice, 2022
This study employs the 4-parameter logistic item response theory model to account for the unexpected incorrect responses or slipping effects observed in a large-scale Algebra 1 End-of-Course assessment, including several innovative item formats. It investigates whether modeling the misfit at the upper asymptote has any practical impact on the…
Descriptors: Item Response Theory, Measurement, Student Evaluation, Algebra
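For reference, the 4-parameter logistic model named in this abstract is conventionally written as

P(X = 1 \mid \theta) = c + (d - c)\,\frac{1}{1 + e^{-a(\theta - b)}},

where the upper asymptote d < 1 captures slipping (high-ability examinees answering incorrectly); fixing d = 1 recovers the familiar 3PL model.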