Showing 5,641 to 5,655 of 9,547 results
Peer reviewed
Zenisky, April L.; Hambleton, Ronald K.; Robin, Frederic – Educational Assessment, 2004
Differential item functioning (DIF) analyses are a routine part of the development of large-scale assessments. Less common are studies to understand the potential sources of DIF. The goals of this study were (a) to identify gender DIF in a large-scale science assessment and (b) to look for trends in the DIF and non-DIF items due to content,…
Descriptors: Program Effectiveness, Test Format, Science Tests, Test Items
Peer reviewed
Hoijtink, Herbert; Notenboom, Annelise – Psychometrika, 2004
There are two main theories with respect to the development of spelling ability: the stage model and the model of overlapping waves. In this paper, exploratory model-based clustering is used to analyze the responses of more than 3,500 pupils to subsets of 245 items. To evaluate the two theories, the resulting clusters will be ordered along a…
Descriptors: Spelling, Multivariate Analysis, Data Analysis, Skill Development
Peer reviewed
Briggs, Derek C.; Alonzo, Alicia C.; Schwab, Cheryl; Wilson, Mark – Educational Assessment, 2006
In this article we describe the development, analysis, and interpretation of a novel item format we call Ordered Multiple-Choice (OMC). A unique feature of OMC items is that they are linked to a model of student cognitive development for the construct being measured. Each of the possible answer choices in an OMC item is linked to developmental…
Descriptors: Diagnostic Tests, Multiple Choice Tests, Cognitive Development, Item Response Theory
Peer reviewed
Chase, Margaret E. – New Educator, 2006
This essay discusses the author's encounter with PRAXIS II test preparation questions that distort and misrepresent complex reading processes. Three sample test items are presented and discussed.
Descriptors: Test Items, Teacher Competency Testing, Reading Teachers, Test Coaching
Peer reviewed
Hautau, Briana; Turner, Haley C.; Carroll, Erin; Jaspers, Kathryn; Krohn, Katy; Parker, Megan; Williams, Robert L. – Journal of Behavioral Education, 2006
Students (N=153) in three equivalent sections of an undergraduate human development course compared pairs of related concepts via either written or oral discussion at the beginning of most class sessions. A writing-for-random-credit section achieved significantly higher ratings on the writing activities than did a writing-for-no-credit section.…
Descriptors: Writing Exercises, Multiple Choice Tests, Undergraduate Study, Credits
Peer reviewed
Sawaki, Yasuyo; Stricker, Lawrence; Oranje, Andreas – ETS Research Report Series, 2008
The present study investigated the factor structure of a field trial sample of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT). An item-level confirmatory factor analysis (CFA) was conducted for a polychoric correlation matrix of items on a test form completed by 2,720 participants in the 2003-2004 TOEFL iBT Field…
Descriptors: Factor Structure, Computer Assisted Testing, Multitrait Multimethod Techniques, Scores
Stotsky, Sandra – Education Working Paper Archive, 2008
The 1998 reauthorization of the Higher Education Act requires all states to report annually to the U.S. Department of Education the number of prospective teachers at each teacher training institution who pass their own state tests for licensure. However, the law left decisions on what tests to require in each field, what to assess on them, and…
Descriptors: Mathematics Education, Test Items, Educational Testing, Special Education Teachers
US Citizenship and Immigration Services, 2008
"Naturalization Test Redesign Project: Civics Item Selection Analysis" provides an overview of the development of content items for the U.S. history and government (civics) portion of the redesigned naturalization test. This document also reviews the process used to gather and analyze data from multiple studies to determine which civics…
Descriptors: History, Test Items, Citizenship, Individual Testing
Liu, Kimy; Carling, Kristy; Geller, Leanne Ketterlin; Tindal, Gerald – Behavioral Research and Teaching, 2008
In this study, we describe the development of rapid reading measures, sentences presented to students in a nearly subliminal manner, with a literal comprehension question asked following their removal. After administering alternate forms of these measures to students, we present the results from three statistical analyses to ascertain their…
Descriptors: Test Construction, Speed Reading, Reading Rate, Sentences
Peer reviewed
Song, Min-Young – Language Testing, 2008
This paper concerns the divisibility of comprehension subskills measured in L2 listening and reading tests. Motivated by the administration of the new Web-based English as a Second Language Placement Exam (WB-ESLPE) at UCLA, this study addresses the following research questions: first, to what extent do the WB-ESLPE listening and reading items…
Descriptors: Structural Equation Models, Second Language Learning, Reading Tests, Inferences
Bridgeman, Brent; Laitusis, Cara Cahalan; Cline, Frederick – College Board, 2007
The current study used three data sources to estimate time requirements for different item types on the now current SAT Reasoning Test™. First, we estimated times from a computer-adaptive version of the SAT® (SAT CAT) that automatically recorded item times. Second, we observed students as they answered SAT questions under strict time limits and…
Descriptors: College Entrance Examinations, Test Items, Thinking Skills, Computer Assisted Testing
Peer reviewed
Spaan, Mary – Language Assessment Quarterly, 2007
This article follows the development of test items (see "Language Assessment Quarterly", Volume 3 Issue 1, pp. 71-79 for the article "Test and Item Specifications Development"), beginning with a review of test and item specifications, then proceeding to writing and editing of items, pretesting and analysis, and finally selection of an item for a…
Descriptors: Test Items, Test Construction, Responses, Test Content
Peer reviewed
Cecen, Ayse Rezan – Educational Sciences: Theory and Practice, 2007
The purpose of this study is to investigate the validity and reliability of the short form of the Family Sense of Coherence Scale, which was originally developed with 26 items by Antonovsky and Sourani (1988) and shortened to a 12-item form by Sagy (1998). The scale measures individuals' perception of family sense of coherence and can be applied to adolescents and…
Descriptors: Undergraduate Students, Test Reliability, Test Validity, Measures (Individuals)
Peer reviewed
Arendasy, Martin; Sommer, Markus – Learning and Individual Differences, 2007
This article deals with the investigation of the psychometric quality and construct validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on a review of the research literature on algebra word problem solving and automatic item generation, this new approach is introduced as a…
Descriptors: Schemata (Cognition), Test Items, Intelligent Tutoring Systems, Construct Validity
Badgett, John L.; Christmann, Edwin P. – Corwin, 2009
While today's curriculum is largely driven by standards, many teachers find the lack of specificity in the standards to be confounding and even intimidating. Now this practical book provides middle and high school teachers with explicit guidance on designing specific objectives and developing appropriate formative and summative assessments to…
Descriptors: Test Items, Student Evaluation, Knowledge Level, National Standards