Showing all 11 results
Peer reviewed
PDF on ERIC: Download full text
Levin, Nathan A. – Journal of Educational Data Mining, 2021
The Big Data for Education Spoke of the NSF Northeast Big Data Innovation Hub and ETS co-sponsored an educational data mining competition in which contestants were asked to predict efficient time use on the NAEP 8th grade mathematics computer-based assessment, based on the log file of a student's actions on a prior portion of the assessment. In…
Descriptors: Learning Analytics, Data Collection, Competition, Prediction
Peer reviewed
PDF on ERIC: Download full text
Gezer, Tuba; Wang, Chuang; Polly, Andrew; Martin, Christie; Pugalee, David; Lambert, Richard – International Electronic Journal of Elementary Education, 2021
This study used hierarchical linear modeling to examine the relationship between an internet-based mathematics formative assessment and data from a mathematics summative assessment for primary grade learners (ages 5-7). Results showed a positive relationship between formative assessment data related to the concepts of counting and decomposing…
Descriptors: Formative Evaluation, Summative Evaluation, Elementary School Mathematics, Mathematics Instruction
Peer reviewed
Direct link
Relkin, Emily; de Ruiter, Laura; Bers, Marina Umaschi – Journal of Science Education and Technology, 2020
There is a need for developmentally appropriate Computational Thinking (CT) assessments that can be implemented in early childhood classrooms. We developed a new instrument called "TechCheck" for assessing CT skills in young children that does not require prior knowledge of computer programming. "TechCheck" is based on…
Descriptors: Developmentally Appropriate Practices, Computation, Thinking Skills, Early Childhood Education
Peer reviewed
Direct link
Sangwin, Christopher J.; Jones, Ian – Educational Studies in Mathematics, 2017
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Descriptors: Mathematics Achievement, Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing
Peer reviewed
Direct link
Hula, William D.; Kellough, Stacey; Fergadiotis, Gerasimos – Journal of Speech, Language, and Hearing Research, 2015
Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996), to reduce test length while maximizing measurement precision. This article is a direct extension of a companion article (Fergadiotis, Kellough, & Hula, 2015),…
Descriptors: Computer Assisted Testing, Adaptive Testing, Naming, Test Construction
Peer reviewed
Direct link
Nelson, Peter M.; Parker, David C.; Zaslofsky, Anne F. – Assessment for Effective Intervention, 2016
The purpose of the current study was to evaluate the importance of growth in math fact skills within the context of overall math proficiency. Data for 1,493 elementary and middle school students were included for analysis. Regression models were fit to examine the relative value of math fact fluency growth, prior state test performance, and a fall…
Descriptors: Mathematics, Mathematics Instruction, Mathematics Skills, Mathematics Achievement
Peer reviewed
Direct link
Watchorn, Rebecca P. D.; Bisanz, Jeffrey; Fast, Lisa; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Smith-Chant, Brenda L. – Journal of Cognition and Development, 2014
The principle of "inversion," that a + b - b "must" equal a, is a fundamental property of arithmetic, but many children fail to apply it in symbolic contexts through 10 years of age. We explore three hypotheses relating to the use of inversion that stem from a model proposed by Siegler and Araya (2005). Hypothesis 1 is that…
Descriptors: Mathematics Skills, Skill Development, Computation, Attention Control
Peer reviewed
PDF on ERIC: Download full text
Rock, Donald A. – ETS Research Report Series, 2007
This paper presents a strategy for measuring cognitive gains in reading during the early school years. It is argued that accurate estimates of gain scores and their appropriate interpretation require the use of adaptive tests with multiple criterion-referenced points that mark learning milestones. It is further argued that two different measures…
Descriptors: Scores, Cognitive Development, Computation, Test Interpretation
Peer reviewed
PDF on ERIC: Download full text
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – ETS Research Report Series, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Test Items, Computer Assisted Testing, Computation, Adaptive Testing
Peer reviewed
PDF on ERIC: Download full text
Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard – ETS Research Report Series, 2005
SCORIGHT is a very general computer program for scoring tests. It models tests that are made up of dichotomously or polytomously rated items or any kind of combination of the two through the use of a generalized item response theory (IRT) formulation. The items can be presented independently or grouped into clumps of allied items (testlets) or in…
Descriptors: Computer Assisted Testing, Statistical Analysis, Test Items, Bayesian Statistics
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect