Showing 1 to 15 of 19 results
Peer reviewed
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each test form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
Peer reviewed
Rupp, André A.; van Rijn, Peter W. – Measurement: Interdisciplinary Research and Perspectives, 2018
We review the GDINA and CDM packages in R for fitting cognitive diagnosis/diagnostic classification models. We first provide a summary of their core capabilities and then use both simulated and real data to compare their functionalities in practice. We found that the most relevant routines in the two packages appear to be more similar than…
Descriptors: Educational Assessment, Cognitive Measurement, Measurement, Computer Software
Peer reviewed
Ishii, Takatoshi; Songmuang, Pokpong; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2014
Educational assessments occasionally require uniform test forms for which each test form comprises a different set of items, but the forms meet equivalent test specifications (i.e., qualities indicated by test information functions based on item response theory). We propose two maximum clique algorithms (MCA) for uniform test form assembly. The…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
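To illustrate the general idea behind the maximum-clique approach to uniform test assembly described in the two entries above (not the authors' specific algorithms), the sketch below treats each candidate test form as a graph node, connects two forms when they share no items, and extracts a largest clique as a set of mutually item-disjoint parallel forms. The candidate forms, the no-overlap edge rule, and the use of networkx are illustrative assumptions.

```python
# Illustrative sketch only: a maximum-clique view of uniform test assembly.
# The candidate forms, the disjointness criterion, and networkx are
# assumptions for exposition, not the algorithms proposed in the cited papers.
import networkx as nx

# Hypothetical candidate test forms that are assumed to already meet the
# target test-information specification; each form is a set of item IDs.
candidate_forms = [
    {1, 2, 3}, {4, 5, 6}, {1, 5, 9},
    {7, 8, 9}, {2, 6, 7}, {10, 11, 12},
]

# Build a graph whose nodes are candidate forms; connect two forms only if
# they share no items, so they can coexist as non-overlapping uniform forms.
G = nx.Graph()
G.add_nodes_from(range(len(candidate_forms)))
for i in range(len(candidate_forms)):
    for j in range(i + 1, len(candidate_forms)):
        if candidate_forms[i].isdisjoint(candidate_forms[j]):
            G.add_edge(i, j)

# A maximum clique is the largest set of mutually item-disjoint forms;
# nx.find_cliques enumerates maximal cliques, from which we take the largest.
largest_clique = max(nx.find_cliques(G), key=len)
print("Assembled uniform forms:", [candidate_forms[i] for i in largest_clique])
```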
Peer reviewed
Gattamorta, Karina A.; Penfield, Randall D.; Myers, Nicholas D. – International Journal of Testing, 2012
Measurement invariance is a common consideration in the evaluation of the validity and fairness of test scores when the tested population contains distinct groups of examinees, such as examinees receiving different forms of a translated test. Measurement invariance in polytomous items has traditionally been evaluated at the item-level,…
Descriptors: Foreign Countries, Psychometrics, Test Bias, Test Items
Peer reviewed
Cronan, Timothy Paul; Leger, Pierre-Majorique; Robert, Jacques; Babin, Gilbert; Charland, Patrick – Simulation & Gaming, 2012
Enterprise Resource Planning (ERP) systems have had a significant impact on business organizations. These large systems offer opportunities for companies regarding the integration and functionality of information technology systems; in effect, companies can realize a competitive advantage that is necessary in today's global economy. However,…
Descriptors: Simulation, Information Technology, Educational Assessment, Management Information Systems
Peer reviewed
Hickey, Daniel T.; Zuiker, Steven J. – Journal of the Learning Sciences, 2012
Evaluating the impact of instructional innovations and coordinating instruction, assessment, and testing present complex tensions. Many evaluation and coordination efforts aim to address these tensions by using the coherence provided by modern cognitive science perspectives on domain-specific learning. This paper introduces an alternative…
Descriptors: Program Effectiveness, Achievement Tests, Performance Based Assessment, Genetics
Peer reviewed
Gutierrez, Eladio; Trenas, Maria A.; Ramos, Julian; Corbera, Francisco; Romero, Sergio – Computers & Education, 2010
This work describes a new "Moodle" module developed to give support to the practical content of a basic computer organization course. This module goes beyond the mere hosting of resources and assignments. It makes use of an automatic checking and verification engine that works on the VHDL designs submitted by the students. The module automatically…
Descriptors: Assignments, Teamwork, Units of Study, Educational Assessment
Peer reviewed
von Davier, Matthias; Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2007
Reporting methods used in large-scale assessments such as the National Assessment of Educational Progress (NAEP) rely on latent regression models. To fit the latent regression model using the maximum likelihood estimation technique, multivariate integrals must be evaluated. In the computer program MGROUP used by the Educational Testing Service for…
Descriptors: Simulation, Computer Software, Sampling, Data Analysis
Behrens, John T.; Mislevy, Robert J.; DiCerbo, Kristen E.; Levy, Roy – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2010
The world in which learning and assessment must take place is rapidly changing. The digital revolution has created a vast space of interconnected information, communication, and interaction. Functioning effectively in this environment requires so-called 21st century skills such as technological fluency, complex problem solving, and the ability to…
Descriptors: Evidence, Student Evaluation, Educational Assessment, Influence of Technology
Peer reviewed
Ketelhut, Diane Jass; Nelson, Brian C.; Clarke, Jody; Dede, Chris – British Journal of Educational Technology, 2010
This study investigated novel pedagogies for helping teachers infuse inquiry into a standards-based science curriculum. Using a multi-user virtual environment (MUVE) as a pedagogical vehicle, teams of middle-school students collaboratively solved problems around disease in a virtual town called River City. The students interacted with "avatars" of…
Descriptors: Urban Areas, Virtual Classrooms, Science Education, Science Curriculum
Peer reviewed
Russell, Michael; Kavanaugh, Maureen; Masters, Jessica; Higgins, Jennifer; Hoffmann, Thomas – Journal of Applied Testing Technology, 2009
Many students who are deaf or hard-of-hearing are eligible for a signing accommodation for state and other standardized tests. The signing accommodation, however, presents several challenges for testing programs that attempt to administer tests under standardized conditions. One potential solution for many of these challenges is the use of…
Descriptors: Testing Programs, Student Attitudes, Standardized Tests, Academic Achievement
Chung, Gregory K. W. K.; Herl, Howard E.; Klein, Davina C. D.; O'Neil, Harold F., Jr.; Schacter, John – 1997
This report examines issues in the scale-up of assessment software from the Center for Research on Evaluation, Standards, and Student Testing (CRESST). "Scale-up" is used in a metaphorical sense, meaning adding new assessment tools to CRESST's assessment software. During the past several years, CRESST has been developing and evaluating a…
Descriptors: Computer Assisted Testing, Computer Software, Concept Mapping, Educational Assessment
Herman, Joan L. – 1987
Several studies were conducted in 1987 by the Multilevel Evaluation Systems Project, which focuses on developing a model for a multi-purpose, multi-user evaluation system to facilitate educational decision making and evaluation. The project model emphasizes on-going integrated assessment of individuals, classes, and programs using a variety of…
Descriptors: Computer Simulation, Computer Software, Database Management Systems, Databases
Roberts, David C. – 1993
The differences between multiple-choice, simulated, and concurrent tests of software-skills proficiency are discussed. For three basic human-resource functions, the advantages of concurrent tests (i.e., those that use the actual application software) include true performance-based assessment, unconstrained response alternatives, and increased job…
Descriptors: Competence, Computer Literacy, Computer Oriented Programs, Computer Software
Peer reviewed
Zapata-Rivera, Diego; VanWinkle, Waverely; Doyle, Bryan; Buteux, Alyssa; Bauer, Malcolm – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to propose and demonstrate an evidence-based scenario design framework for assessment-based computer games. Design/methodology/approach: The evidence-based scenario design framework is presented and demonstrated by using BELLA, a new assessment-based gaming environment aimed at supporting student learning of…
Descriptors: Feedback (Response), Urban Schools, Measurement, Psychometrics