Showing all 8 results
Peer reviewed
PDF on ERIC: Download full text
Lopez, Alexis A.; Tolentino, Florencia – ETS Research Report Series, 2020
In this study we investigated how English learners (ELs) interacted with "®" summative English language arts (ELA) and mathematics items, the embedded online tools, and accessibility features. We focused on how EL students navigated the assessment items; how they selected or constructed their responses; how they interacted with the…
Descriptors: English Language Learners, Student Evaluation, Language Arts, Summative Evaluation
Peer reviewed
Direct link
Yasuno, Fumiko; Nishimura, Keiichi; Negami, Seiya; Namikawa, Yukihiko – International Journal for Technology in Mathematics Education, 2019
Our study concerns the development of mathematics items for Computer-Based Testing (CBT) on tablet PCs. These are subject-based items that use interactive dynamic objects. The purpose of this study is to draw suggestions for further work from the field-test results for the developed items. First, we clarified the role of the interactive dynamic…
Descriptors: Mathematics Instruction, Mathematics Tests, Test Items, Computer Assisted Testing
Peer reviewed
PDF on ERIC: Download full text
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Peer reviewed
Direct link
Senarat, Somprasong; Tayraukham, Sombat; Piyapimonsit, Chatsiri; Tongkhambanjong, Sakesan – Educational Research and Reviews, 2013
The purpose of this research is to develop a multidimensional computerized adaptive test for diagnosing the cognitive processes of grade 7 students in learning algebra by applying multidimensional item response theory. The research is divided into four steps: 1) the development of an algebra item bank, 2) the development of the multidimensional…
Descriptors: Foreign Countries, Mathematics Tests, Test Construction, Item Response Theory
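The Senarat et al. entry applies multidimensional item response theory (MIRT). For orientation, a common compensatory multidimensional logistic model (a standard textbook form; the exact parameterization used in the study may differ) gives the probability of a correct response to item i for an examinee with ability vector theta as

    P(X_i = 1 \mid \boldsymbol{\theta}) = c_i + \frac{1 - c_i}{1 + \exp[-(\mathbf{a}_i^{\top}\boldsymbol{\theta} + d_i)]}

where a_i is the item's vector of discrimination parameters (one per cognitive dimension), d_i an intercept, and c_i a lower asymptote. A multidimensional adaptive test then selects each next item to be most informative about the current estimate of theta.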
Peer reviewed
Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
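The Bennett et al. entry concerns automated scoring of an open-ended item type whose correct answers can take many surface forms. A minimal sketch of that general idea, assuming purely symbolic mathematical responses and using sympy rather than the study's actual scoring engine (the function and example expressions below are hypothetical):

    # Illustrative only: credit a free-form mathematical response when it is
    # algebraically equivalent to the key, so that different surface forms of
    # the same correct answer (e.g. "2*(x + 1)" vs. "2*x + 2") all score 1.
    from sympy import simplify, sympify
    from sympy.core.sympify import SympifyError

    def score_response(response: str, key: str) -> int:
        """Return 1 if the response is algebraically equivalent to the key, else 0."""
        try:
            difference = simplify(sympify(response) - sympify(key))
        except (SympifyError, TypeError):
            return 0  # unparseable responses receive no credit
        return 1 if difference == 0 else 0

    print(score_response("2*(x + 1)", "2*x + 2"))  # 1: equivalent surface form
    print(score_response("2*x + 1", "2*x + 2"))    # 0: not equivalent

A production engine for this item type would also need to handle equivalent non-symbolic forms (fractions, decimals, units), which this sketch ignores.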
Wise, Lauress L.; And Others – 1989
The effects of item position on item statistics were studied in a large set of data from tests of word knowledge (WK) and arithmetic reasoning (AR). Position effects on item response theory (IRT) parameter estimates and classical item statistics were also investigated. Data were collected as part of a project to refine the Army's Computerized…
Descriptors: Armed Forces, Computer Assisted Testing, Item Analysis, Latent Trait Theory
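The Wise et al. entry looks at how item position affects classical item statistics and IRT parameter estimates. A rough sketch of the classical side of such a comparison, on entirely synthetic data (none of the names or numbers below come from the study):

    import numpy as np

    def item_stats(scores, totals):
        """Classical item statistics: proportion correct and point-biserial with total score."""
        return scores.mean(), np.corrcoef(scores, totals)[0, 1]

    # Synthetic responses to the same item administered early vs. late in a form.
    rng = np.random.default_rng(0)
    totals = rng.normal(25, 5, 500)                    # simulated total test scores
    p_early = 1 / (1 + np.exp(-(totals - 23) / 3))     # item looks easier early in the test
    p_late = 1 / (1 + np.exp(-(totals - 26) / 3))      # and harder (e.g. fatigue) late
    early = (rng.random(500) < p_early).astype(int)
    late = (rng.random(500) < p_late).astype(int)

    print("early position:", item_stats(early, totals))
    print("late position: ", item_stats(late, totals))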
Sheehan, Kathleen; Mislevy, Robert J. – 1994
The operating characteristics of 114 mathematics pretest items from the Praxis I: Computer Based Test were analyzed in terms of item attributes and test developers' judgments of item difficulty. Item operating characteristics were defined as the difficulty, discrimination, and asymptote parameters of a three-parameter logistic item response theory…
Descriptors: Basic Skills, Computer Assisted Testing, Difficulty Level, Educational Assessment
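The Sheehan and Mislevy entry defines item operating characteristics as the difficulty, discrimination, and asymptote parameters of a three-parameter logistic (3PL) model. In its standard textbook form (the report's exact scaling may differ), the probability that an examinee of ability theta answers an item correctly is

    P(\theta) = c + \frac{1 - c}{1 + \exp[-Da(\theta - b)]}

where a is the discrimination, b the difficulty, c the lower asymptote (the chance of a correct response by guessing), and D a scaling constant, conventionally 1.7.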
Peer reviewed
PDF on ERIC: Download full text
Puhan, Gautam; Boughton, Keith A.; Kim, Sooyeon – ETS Research Report Series, 2005
The study evaluated the comparability of two versions of a teacher certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). Standardized mean difference (SMD) and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that effect sizes…
Descriptors: Comparative Analysis, Test Items, Statistical Analysis, Teacher Certification
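The Puhan, Boughton, and Kim entry uses the standardized mean difference (SMD) as its test-level comparability measure. In its generic effect-size form (the report may use a weighted, ability-conditioned variant common in the DIF literature),

    \mathrm{SMD} = \frac{\bar{X}_{\mathrm{CBT}} - \bar{X}_{\mathrm{PPT}}}{s_{\mathrm{pooled}}}

that is, the mean score difference between the computer-based and paper-and-pencil groups in pooled standard-deviation units; the item-level DIF analyses then check whether individual items behave differently across modes for examinees matched on overall ability.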