Showing all 13 results
Peer reviewed
PDF on ERIC
Sarsa, Sami; Leinonen, Juho; Hellas, Arto – Journal of Educational Data Mining, 2022
New knowledge tracing models are continuously being proposed, at such a pace that state-of-the-art models cannot be compared with one another at the time of publication. This makes ranking models hard, and the underlying reasons for the models' performance -- be they architectural choices, hyperparameter tuning, performance…
Descriptors: Learning Processes, Artificial Intelligence, Intelligent Tutoring Systems, Memory
Punter, R. Annemiek; Glas, Cees A. W.; Meelissen, Martina R. M. – International Association for the Evaluation of Educational Achievement, 2016
Parental involvement is seen as one of the most malleable factors in a student's home situation, which makes it a relevant subject for schools, educational policy, and research. Though many studies have examined its role in student achievement, the findings are not unequivocal. It is difficult to tell whether these inconsistent results are caused by…
Descriptors: Psychometrics, Parent Participation, Reading Skills, Literacy Education
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as the Advanced Placement Program® (AP®) Exams, which contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 50th percentile) on the reading comprehension…
Descriptors: Elementary School Students, Middle School Students, High School Students, Written Language
Peer reviewed
PDF on ERIC
Bollmer, Julie; Cronin, Roberta; Brauen, Marsha; Howell, Bethany; Fletcher, Philip; Gonin, Rene; Jenkins, Frank – National Center for Special Education Research, 2010
The Study of Monitoring and Improvement Practices under the Individuals with Disabilities Education Act (IDEA) examined how states monitored the implementation of IDEA by local special education and early intervention services programs. State monitoring and improvement practices in 2004-05 and 2006-07 were the focus of the study. Prior to the…
Descriptors: Educational Needs, Early Intervention, Disabilities, Data Collection
Hendrickson, Amy B.; Kolen, Michael J. – 2001
This study compared various equating models and procedures for a sample of data from the Medical College Admission Test (MCAT), considering how item response theory (IRT) equating results compare with classical equipercentile results and how the results based on the use of various IRT models, observed score versus true score, direct versus linked…
Descriptors: Equated Scores, Higher Education, Item Response Theory, Models
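
Editorial note, not part of the ERIC record: the entry above contrasts classical equipercentile equating with IRT-based equating, and both are compact enough to state. A minimal sketch in LaTeX, writing F and G for the cumulative score distributions on forms X and Y and P_i(\theta) for an item response function:

% Equipercentile equating sends a form-X score x to the form-Y score
% with the same percentile rank.
\[
  e_Y(x) = G^{-1}\bigl(F(x)\bigr)
\]
% IRT true-score equating pairs the expected scores that the two test
% characteristic curves assign to the same latent ability \theta.
\[
  \tau_X(\theta) = \sum_{i \in X} P_i(\theta), \qquad
  \tau_Y(\theta) = \sum_{j \in Y} P_j(\theta)
\]

Scores \tau_X(\theta) and \tau_Y(\theta) evaluated at the same \theta are treated as equivalent; this true-score variant is one of the alternatives the study compares against the observed-score approach.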
Cantrell, Catherine E. – 1997
This paper discusses the limitations of Classical Test Theory, the purpose of Item Response Theory/Latent Trait Measurement models, and the step-by-step calculations in the Rasch measurement model. The paper explains how Item Response Theory (IRT) transforms person abilities and item difficulties into the same metric for test-independent and…
Descriptors: Ability, Difficulty Level, Estimation (Mathematics), Item Response Theory
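
Editorial note, not part of the ERIC record: the Rasch model whose step-by-step calculations the paper walks through fits in one line. With \beta_n the ability of person n and \delta_i the difficulty of item i, both in logits:

\[
  P(X_{ni} = 1 \mid \beta_n, \delta_i)
    = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}
\]

Because the response probability depends only on the difference \beta_n - \delta_i, person abilities and item difficulties sit on the same metric, which is what makes the test-independent estimation mentioned in the abstract possible.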
Wright, Benjamin D.; Stone, Mark H. – 1979
This handbook explains how to do Rasch measurement. The emphasis is on practice, but theoretical explanations are also provided. The Foreword contains an introduction to the topic of Rasch measurement. Chapters 2, 4, 5, and 7 use a small problem to illustrate the application of Rasch measurement in detail, and methodological issues are considered…
Descriptors: Item Response Theory, Mathematical Models, Measurement Techniques, Psychometrics
PDF pending restoration
Thompson, Bruce; Melancon, Janet G. – 1995
This study evaluated whether a brief self-description checklist could provide a viable method of quickly obtaining initial personality type information. The Personal Preferences Self-Description Questionnaire (PPSDQ) and the Myers-Briggs Type Indicator (MBTI) were administered to 420 college students, and PPSDQ item-response and MBTI…
Descriptors: Check Lists, College Students, Evaluation Methods, Goodness of Fit
Rasch, Georg – 1993
The psychometric research done by G. Rasch between 1951 and 1959, which is explained and illustrated in this book, takes psychometrics from being purely descriptive to being a science of objective measurement. Individual-centered statistics require models in which each individual is characterized separately and from which, given adequate data,…
Descriptors: Achievement Tests, Estimation (Mathematics), Intelligence Tests, Item Response Theory
Ingebo, George S. – 1997
This book shows the advantages of Rasch measurement (G. Rasch) for school district testing programs. The results of Rasch methods are contrasted with conventional statistics for assessing student responses to basic skills testing. Chapter 1 shows how the Rasch probability-based method produces measures that are more useful for students, parents,…
Descriptors: Academic Achievement, Achievement Tests, Elementary Secondary Education, Item Banks
Engelhard, George, Jr.; Myford, Carol M. – College Board, 2003
The purpose of this study was to examine, describe, evaluate, and compare the rating behavior of faculty consultants who scored essays written for the Advanced Placement English Literature and Composition (AP® ELC) Exam. Data from the 1999 AP ELC Exam were analyzed using FACETS (Linacre, 1998) and SAS. The faculty consultants were not all…
Descriptors: Advanced Placement, College Faculty, Consultants, Scoring
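
Editorial note, not part of the ERIC record: FACETS (Linacre, 1998) implements the many-facet Rasch model, which is how rater behavior can be separated from examinee ability and prompt difficulty. In a common rating-scale form, with \beta_n the examinee's ability, \delta_i the difficulty of essay prompt i, C_j the severity of faculty consultant j, and \tau_k the step into rating category k:

\[
  \ln \frac{P_{nijk}}{P_{nij(k-1)}} = \beta_n - \delta_i - C_j - \tau_k
\]

A consultant with a larger severity C_j makes every higher score category harder to reach, so differences among the estimated C_j are direct evidence that the consultants were not all equally lenient.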