Showing 1 to 15 of 124 results
Peer reviewed
Matayoshi, Jeffrey; Cosyn, Eric; Uzun, Hasan – International Journal of Artificial Intelligence in Education, 2021
Many recent studies have looked at the viability of applying recurrent neural networks (RNNs) to educational data. In most cases, this is done by comparing their performance to existing models in the artificial intelligence in education (AIED) and educational data mining (EDM) fields. While there is increasing evidence that, in many situations,…
Descriptors: Artificial Intelligence, Data Analysis, Student Evaluation, Adaptive Testing
Peer reviewed
Full text available on ERIC (PDF)
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – International Educational Data Mining Society, 2019
Short forms of psychometric scales have been commonly used in educational and psychological research to reduce the burden of test administration. However, it is challenging to select items for a short form that preserve the validity and reliability of the scores of the original scale. This paper presents and evaluates multiple automated methods…
Descriptors: Psychometrics, Measures (Individuals), Mathematics, Heuristics
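The abstract above does not spell out the automated methods it evaluates, so the sketch below is only a generic illustration of the underlying task: greedily choosing a subset of items so that internal consistency (Cronbach's alpha) stays high. The data are simulated, and nothing here corresponds to the algorithms compared in the paper.

```python
# Toy greedy short-form selection on simulated data (illustrative only;
# not the automated methods evaluated by Raborn, Leite, and Marcoulides).
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def greedy_short_form(responses, length):
    """Add, one at a time, the item that most improves alpha."""
    chosen, remaining = [], list(range(responses.shape[1]))
    while len(chosen) < length:
        best_item, best_alpha = None, -np.inf
        for item in remaining:
            trial = chosen + [item]
            alpha = cronbach_alpha(responses[:, trial]) if len(trial) > 1 else 0.0
            if alpha > best_alpha:
                best_item, best_alpha = item, alpha
        chosen.append(best_item)
        remaining.remove(best_item)
    return chosen, best_alpha

rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))                      # simulated latent trait
items = trait + rng.normal(scale=1.0, size=(500, 20))  # 20 noisy indicators
form, alpha = greedy_short_form(items, length=8)
print("selected items:", sorted(form), "alpha:", round(alpha, 3))
```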
Peer reviewed
Isbell, Daniel R.; Kremmel, Benjamin – Language Testing, 2020
Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken…
Descriptors: Language Tests, High Stakes Tests, Language Proficiency, Second Language Learning
Li, Dan; Benton, Stephen L. – IDEA Center, Inc., 2017
In the study evaluated in this report, the authors asked what effect survey length has on student non-response rates to individual items on IDEA's "Diagnostic Feedback" (DF) and "Learning Essentials" (LE) forms. The approach was to analyze individual student ratings of classes contained in the 2015-2016 IDEA-CL database.…
Descriptors: Response Rates (Questionnaires), Student Surveys, Test Length, Test Items
Peer reviewed
Norris, John; Drackert, Anastasia – Language Testing, 2018
The Test of German as a Foreign Language (TestDaF) plays a critical role as a standardized test of German language proficiency. Developed and administered by the Society for Academic Study Preparation and Test Development (g.a.s.t.), TestDaF was launched in 2001 and has experienced persistent annual growth, with more than 44,000 test takers in…
Descriptors: German, Second Language Learning, Language Tests, Language Proficiency
Peer reviewed
Full text available on ERIC (PDF)
Vaheoja, Monika; Verhelst, N. D.; Eggen, T.J.H.M. – European Journal of Science and Mathematics Education, 2019
In this article, the authors applied profile analysis to Maths exam data to demonstrate how different exam forms, differing in difficulty and length, can be reported and easily interpreted. The results were presented for different groups of participants and for different institutions in different Maths domains by evaluating the balance. Some…
Descriptors: Feedback (Response), Foreign Countries, Statistical Analysis, Scores
Peer reviewed
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
James, Syretta R.; Liu, Shihching Jessica; Maina, Nyambura; Wade, Julie; Wang, Helen; Wilson, Heather; Wolanin, Natalie – Montgomery County Public Schools, 2021
The impact of the COVID-19 pandemic continues to overwhelm the functioning and outcomes of educational systems throughout the nation. The public education system is under particular scrutiny given that students, families, and educators are under considerable stress to maintain academic progress. Since the beginning of the crisis, school systems…
Descriptors: Achievement Tests, COVID-19, Pandemics, Public Schools
Jacob, Brian A. – Center on Children and Families at Brookings, 2016
Contrary to popular belief, modern cognitive assessments--including the new Common Core tests--produce test scores based on sophisticated statistical models rather than the simple percent of items a student answers correctly. While there are good reasons for this, it means that reported test scores depend on many decisions made by test designers,…
Descriptors: Scores, Common Core State Standards, Test Length, Test Content
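A quick, hypothetical illustration of the scoring contrast the entry above describes: under a two-parameter logistic (2PL) IRT model, two students with the same percent correct can receive different scale scores, because which items they answered correctly matters. The item parameters below are invented for the example and are not drawn from any operational test.

```python
# Illustrative sketch (not from the Jacob report): percent correct vs. a
# 2PL IRT maximum-likelihood score, with made-up item parameters.
import numpy as np

a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])    # discriminations (hypothetical)
b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])  # difficulties (hypothetical)

def p_correct(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_theta(responses, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood ability estimate by grid search."""
    log_lik = [np.sum(responses * np.log(p_correct(t)) +
                      (1 - responses) * np.log(1 - p_correct(t))) for t in grid]
    return grid[int(np.argmax(log_lik))]

easy_right = np.array([1, 1, 1, 0, 0])  # 60% correct, easiest items right
hard_right = np.array([0, 0, 1, 1, 1])  # 60% correct, hardest items right
print("percent correct:", easy_right.mean(), hard_right.mean())
print("IRT scores:", ml_theta(easy_right), ml_theta(hard_right))
```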
Peer reviewed
Runco, Mark A.; Walczyk, Jeffrey John; Acar, Selcuk; Cowger, Ernest L.; Simundson, Melissa; Tripp, Sunny – Journal of Creative Behavior, 2014
This article describes an empirical refinement of the "Runco Ideational Behavior Scale" (RIBS). The RIBS seems to be associated with divergent thinking, and the potential for creative thinking, but it was possible that its validity could be improved. With this in mind, three new scales were developed and the unique benefit (or…
Descriptors: Behavior Rating Scales, Creative Thinking, Test Validity, Psychometrics
Peer reviewed
Lathrop, Quinn N.; Cheng, Ying – Journal of Educational Measurement, 2014
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
Descriptors: Cutting Scores, Classification, Computation, Nonparametric Statistics
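As background on the two quantities named in the abstract, the toy simulation below shows what classification accuracy (CA) and classification consistency (CC) mean at a total-score cut score. It is not the nonparametric estimator the article develops; every value is simulated.

```python
# Toy simulation of CA and CC at a cut score (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
n, cut = 100_000, 60.0
true_score = rng.normal(loc=62, scale=10, size=n)      # hypothetical true scores
form_a = true_score + rng.normal(scale=4, size=n)      # observed score, form A
form_b = true_score + rng.normal(scale=4, size=n)      # observed score, form B

pass_true, pass_a, pass_b = true_score >= cut, form_a >= cut, form_b >= cut
ca = np.mean(pass_a == pass_true)  # accuracy: observed decision matches true decision
cc = np.mean(pass_a == pass_b)     # consistency: two replications agree
print("CA:", round(ca, 3), " CC:", round(cc, 3))
```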
Peer reviewed
Doskey, Elena M.; Lagunas, Brenda; SooHoo, Michelle; Lomax, Amanda; Bullick, Stephanie – Journal of Psychoeducational Assessment, 2013
The Speed DIAL-4 was developed from the Developmental Indicators for the Assessment of Learning, Fourth Edition (DIAL-4), a screening instrument designed to identify children between the ages of 2 years, 6 months through 5 years, 11 months "who are in need of intervention or diagnostic assessment in the following areas: motor, concepts, language,…
Descriptors: Screening Tests, Young Children, Test Length, Scoring
Peer reviewed
Stucky, Brian D.; Thissen, David; Edelen, Maria Orlando – Applied Psychological Measurement, 2013
Test developers often need to create unidimensional scales from multidimensional data. For item analysis, "marginal trace lines" capture the relation with the general dimension while accounting for nuisance dimensions and may prove to be a useful technique for creating short-form tests. This article describes the computations needed to obtain…
Descriptors: Test Construction, Test Length, Item Analysis, Item Response Theory
Peer reviewed
Goodman, Joshua T.; Willse, John T.; Allen, Nancy L.; Klaric, John S. – Educational and Psychological Measurement, 2011
The Mantel-Haenszel procedure is a popular technique for determining items that may exhibit differential item functioning (DIF). Numerous studies have focused on the strengths and weaknesses of this procedure, but few have focused on the performance of the Mantel-Haenszel method when structurally missing data are present as a result of test booklet…
Descriptors: Test Bias, Identification, Tests, Test Length
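For readers unfamiliar with the procedure, the following hypothetical sketch computes the Mantel-Haenszel common odds ratio (and the ETS delta transformation) for a single studied item, pooling 2x2 tables over total-score strata. The data are simulated and complete; the booklet-design missing-data conditions examined in the study are not reproduced here.

```python
# Minimal Mantel-Haenszel DIF sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)                       # 0 = reference, 1 = focal
theta = rng.normal(size=n)
total = np.clip(np.round(theta * 5 + 15), 0, 30)    # matching (total-score) variable
p_item = 1 / (1 + np.exp(-(theta - 0.2 * group)))   # small simulated DIF against focal
right = rng.random(n) < p_item

num = den = 0.0
for s in np.unique(total):                          # pool 2x2 tables over score strata
    m = total == s
    a = np.sum(m & (group == 0) & right)            # reference correct
    b = np.sum(m & (group == 0) & ~right)           # reference incorrect
    c = np.sum(m & (group == 1) & right)            # focal correct
    d = np.sum(m & (group == 1) & ~right)           # focal incorrect
    t = a + b + c + d
    if t:
        num += a * d / t
        den += b * c / t
alpha_mh = num / den
print("MH common odds ratio:", round(alpha_mh, 3),
      " MH delta:", round(-2.35 * np.log(alpha_mh), 3))
```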
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
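The abstract mentions a standard-error stopping rule. As a rough, assumption-laden sketch (a unidimensional 2PL CAT with maximum-information selection and EAP updating on a grid, over a simulated item bank), the code below shows how such a rule interacts with a maximum test length; it does not implement the multidimensional selection procedures compared in the article.

```python
# Toy CAT with a standard-error stopping rule (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
a = rng.uniform(0.8, 2.0, 300)            # simulated discriminations
b = rng.normal(0, 1, 300)                 # simulated difficulties
grid = np.linspace(-4, 4, 161)
prior = np.exp(-grid**2 / 2); prior /= prior.sum()

def prob(theta, i):
    return 1 / (1 + np.exp(-a[i] * (theta - b[i])))

true_theta, post = 0.8, prior.copy()
used, se = [], np.inf
while se > 0.3 and len(used) < 40:        # SE stopping rule with a length cap
    theta_hat = np.sum(grid * post)                           # EAP estimate
    p = prob(theta_hat, np.arange(300))
    info = a**2 * p * (1 - p)                                 # Fisher information
    info[used] = -np.inf                                      # no item reuse
    i = int(np.argmax(info))                                  # max-information selection
    used.append(i)
    x = rng.random() < prob(true_theta, i)                    # simulated response
    like = prob(grid, i) if x else 1 - prob(grid, i)
    post = post * like; post /= post.sum()                    # posterior update
    se = np.sqrt(np.sum(post * (grid - np.sum(grid * post))**2))
print("items administered:", len(used), " final SE:", round(se, 3))
```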