Showing all 10 results
Peer reviewed
PDF on ERIC: Download full text
Ozdemir, Gulden; Ozdemir, Atilla; Gelbal, Selahattin – International Journal of Assessment Tools in Education, 2021
This study aims to determine the cyber accessibility of teacher-made exams by analyzing teachers', students', and parents' views on the subject. To fulfill this purpose, 60 exam papers in 4 different courses, including Turkish, Mathematics, Science, and Social Studies/Atatürk's Principles and Revolutions, were examined through the technique of…
Descriptors: Teacher Made Tests, Test Format, Computer Assisted Testing, Foreign Countries
New York State Education Department, 2024
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts, Mathematics, and Grades 5 & 8 Science Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed…
Descriptors: Testing Programs, Language Arts, Mathematics Tests, Science Tests
Peer reviewed
Direct link
Steedle, Jeffrey T.; Morrison, Kristin M. – Educational Assessment, 2019
Assessment items are commonly field tested prior to operational use to observe statistical item properties such as difficulty. Item parameter estimates from field testing may be used to assign scores via pre-equating or computer adaptive designs. This study examined differences between item difficulty estimates based on field test and operational…
Descriptors: Field Tests, Test Items, Statistics, Difficulty Level
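For context, pre-equating depends on field-test item statistics holding up at operational administration. The sketch below (fully simulated data, not the authors' analysis) compares classical difficulty estimates, i.e., proportions correct, from a hypothetical field test and a later operational administration, with lower field-test motivation depressing performance slightly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 40
true_p = rng.uniform(0.3, 0.9, n_items)   # "true" proportions correct

# Field-test examinees may be less motivated, one proposed source of
# field-test/operational drift; shift their success rates down slightly.
field = rng.binomial(1, np.clip(true_p - 0.05, 0, 1), size=(500, n_items))
oper  = rng.binomial(1, true_p, size=(5000, n_items))

p_field = field.mean(axis=0)   # field-test difficulty estimates
p_oper  = oper.mean(axis=0)    # operational difficulty estimates

print(f"mean drift (operational - field): {np.mean(p_oper - p_field):+.3f}")
print(f"cross-administration correlation: {np.corrcoef(p_field, p_oper)[0, 1]:.3f}")
```

Operational programs typically compare IRT difficulty parameters rather than p-values; p-values simply keep the illustration short.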
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
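As a rough illustration of the kind of adjustment equating performs, the sketch below applies a linear equating transformation to map computer-mode scores onto the paper-mode scale. The scores are simulated and the method is a stand-in; operational ACT equating is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
paper    = rng.normal(20.0, 5.0, 2000)   # hypothetical paper-mode scores
computer = rng.normal(20.6, 5.0, 2000)   # slightly higher on-screen mean

def linear_equate(x, from_scores, to_scores):
    """Map a raw score x from the 'from' score scale onto the 'to' scale."""
    mu_f, sd_f = from_scores.mean(), from_scores.std(ddof=1)
    mu_t, sd_t = to_scores.mean(), to_scores.std(ddof=1)
    return mu_t + (sd_t / sd_f) * (x - mu_f)

raw = 25.0
print(f"computer-mode {raw} -> paper scale {linear_equate(raw, computer, paper):.2f}")
```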
Peer reviewed
PDF on ERIC: Download full text
Lopez, Alexis A.; Guzman-Orth, Danielle; Zapata-Rivera, Diego; Forsyth, Carolyn M.; Luce, Christine – ETS Research Report Series, 2021
Substantial progress has been made toward applying technology enhanced conversation-based assessments (CBAs) to measure the English-language proficiency of English learners (ELs). CBAs are conversation-based systems that use conversations among computer-animated agents and a test taker. We expanded the design and capability of prior…
Descriptors: Accuracy, English Language Learners, Language Proficiency, Language Tests
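To make the turn structure of a conversation-based assessment concrete, here is a deliberately tiny sketch of scripted agent prompts with a keyword-based evidence rule. Every agent name, prompt, and scoring rule is hypothetical and far simpler than the ETS systems described in the report.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    agent: str       # which animated agent speaks
    prompt: str      # what the agent says to the test taker
    keywords: list   # toy evidence rule for scoring this turn

SCRIPT = [
    Turn("Ms. Lee", "Can you tell Sam how photosynthesis starts?", ["light", "sun"]),
    Turn("Sam", "And what do plants make with that energy?", ["sugar", "glucose"]),
]

def score_turn(turn: Turn, response: str) -> int:
    """Return 1 if the response mentions any expected keyword, else 0."""
    text = response.lower()
    return int(any(k in text for k in turn.keywords))

# Canned responses stand in for live test-taker input.
responses = ["Plants capture light from the sun.", "They make sugar for energy."]
score = sum(score_turn(t, r) for t, r in zip(SCRIPT, responses))
print(f"evidence points: {score}/{len(SCRIPT)}")
```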
Peer reviewed
PDF on ERIC: Download full text
Koretz, Daniel; Yu, Carol; Mbekeani, Preeya P.; Langi, Meredith; Dhaliwal, Tasmin; Braslow, David – AERA Open, 2016
The current focus on assessing "college and career readiness" raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of…
Descriptors: Predictor Variables, College Freshmen, Grade Point Average, College Entrance Examinations
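The comparison the authors describe is usually summarized with validity coefficients. The sketch below simulates the setup, two tests measuring a common ability with different amounts of error, each correlated against first-year GPA; all values are hypothetical and carry no relation to the CUNY or Kentucky data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
ability = rng.normal(size=n)

hs_test  = ability + rng.normal(scale=0.8, size=n)   # high school test score
adm_test = ability + rng.normal(scale=0.7, size=n)   # admissions test score
fygpa    = 3.0 + 0.4 * ability + rng.normal(scale=0.5, size=n)  # first-year GPA

for name, score in [("HS test", hs_test), ("Admissions test", adm_test)]:
    r = np.corrcoef(score, fygpa)[0, 1]
    print(f"{name}: r = {r:.3f}, variance explained = {r * r:.3f}")
```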
Peer reviewed
Direct link
Rhodes, Katherine T.; Branum-Martin, Lee; Washington, Julie A.; Fuchs, Lynn S. – Journal of Educational Psychology, 2017
Using multitrait-multimethod data and confirmatory factor analysis, the current study examined the effects of arithmetic item formatting and the possibility that, across formats, abilities other than arithmetic may contribute to children's answers. Measurement hypotheses were guided by several leading theories of arithmetic cognition. With a…
Descriptors: Arithmetic, Mathematics Tests, Test Format, Psychometrics
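A quick way to see the multitrait-multimethod (MTMM) logic is to simulate two traits crossed with two item formats and inspect the correlation pattern. The sketch below does exactly that with hypothetical variable names; the authors fit full confirmatory factor models, which this does not attempt.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 800
arith, reading = rng.normal(size=n), rng.normal(size=n)   # trait factors
fmt_a, fmt_b   = rng.normal(size=n), rng.normal(size=n)   # method (format) factors

df = pd.DataFrame({
    "arith_fmtA":   arith   + 0.5 * fmt_a + rng.normal(scale=0.6, size=n),
    "arith_fmtB":   arith   + 0.5 * fmt_b + rng.normal(scale=0.6, size=n),
    "reading_fmtA": reading + 0.5 * fmt_a + rng.normal(scale=0.6, size=n),
    "reading_fmtB": reading + 0.5 * fmt_b + rng.normal(scale=0.6, size=n),
})
R = df.corr()

# Convergent validity: same trait, different method should correlate highly.
print("monotrait-heteromethod (arith):", round(R.loc["arith_fmtA", "arith_fmtB"], 3))
# Method (format) effects: different traits, same method still correlate.
print("heterotrait-monomethod (fmt A):", round(R.loc["arith_fmtA", "reading_fmtA"], 3))
```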
Kim, Sooyeon; Walker, Michael E. – Educational Testing Service, 2011
This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…
Descriptors: Test Format, Multiple Choice Tests, Test Items, Gender Differences
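As an illustration of a subpopulation invariance check, the sketch below computes a chained-linear NEAT link twice, once per simulated subgroup, and reports the RMSD between the two linking functions. The data, the subgrouping, and the linear method are all stand-ins for the operational procedures studied in the report.

```python
import numpy as np

rng = np.random.default_rng(4)

def chained_linear(x, xm, xs, v1m, v1s, v2m, v2s, ym, ys):
    """Link X -> anchor V (new-form group) -> Y (old-form group), both linear."""
    v = v1m + (v1s / xs) * (x - xm)      # new-form scale to anchor scale
    return ym + (ys / v2s) * (v - v2m)   # anchor scale to old-form scale

def link_for(group):
    """Evaluate the linking function on a grid of raw scores."""
    x, v1, y, v2 = group
    return chained_linear(np.linspace(0, 60, 61),
                          x.mean(), x.std(), v1.mean(), v1.std(),
                          v2.mean(), v2.std(), y.mean(), y.std())

def simulate(shift=0.0):
    theta_new = rng.normal(0.0 + shift, 1.0, 3000)
    theta_old = rng.normal(0.2 + shift, 1.0, 3000)
    x  = 30 + 8 * theta_new + rng.normal(0, 3, 3000)   # new-form total score
    v1 = 15 + 4 * theta_new + rng.normal(0, 2, 3000)   # MC anchor, new group
    y  = 32 + 8 * theta_old + rng.normal(0, 3, 3000)   # old-form total score
    v2 = 15 + 4 * theta_old + rng.normal(0, 2, 3000)   # MC anchor, old group
    return x, v1, y, v2

overall  = link_for(simulate())
subgroup = link_for(simulate(shift=0.3))   # e.g., one gender group
rmsd = float(np.sqrt(np.mean((overall - subgroup) ** 2)))
print(f"RMSD between linking functions: {rmsd:.3f}")
```

A small RMSD relative to the score scale suggests the link holds across subgroups; a large one flags the kind of invariance failure the indices in this study are designed to detect.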
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weighting schemes can affect the effective weights, validity coefficients, and reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
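Effective weights and composite reliability both follow from the component covariance matrix, so the effect of reweighting can be shown in a few lines. The sketch below uses a hypothetical two-section composite and Mosier's composite reliability formula; the covariances, reliabilities, and weights are illustrative only.

```python
import numpy as np

S = np.array([[25.0, 10.0],    # hypothetical covariance matrix:
              [10.0, 16.0]])   # MC section, essay section
rel = np.array([0.90, 0.70])   # hypothetical section reliabilities

def composite(w):
    w = np.asarray(w, float)
    var_c = w @ S @ w                            # composite variance
    effective = w * (S @ w) / var_c              # share of composite variance
    err = np.sum(w**2 * np.diag(S) * (1 - rel))  # weighted error variance
    rho_c = 1 - err / var_c                      # Mosier composite reliability
    return effective, rho_c

for w in ([1, 1], [2, 1], [1, 2]):
    eff, rho = composite(w)
    print(f"nominal {w}: effective {np.round(eff, 3)}, reliability {rho:.3f}")
```

Note how the effective weights diverge from the nominal ones whenever the sections have unequal variances, which is exactly the interaction the presentation examines.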
Siskind, Teri G.; Rose, Janet S. – 1986
The Charleston County School District (CCSD) has recently begun development of criterion-referenced tests (CRT) in different subject areas and for different grade levels. This paper outlines the process that CCSD followed in the development of math and language arts tests for grades one through eight and area exams for required high school…
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Educational Objectives, Educational Testing