Showing 1 to 15 of 21 results
Santelices, Maria Veronica; Ugarte, Juan Jose; Flotts, Paulina; Radovic, Darinka; Kyllonen, Patrick – Educational Testing Service, 2011
This paper presents the development and initial validation of new measures of critical thinking and noncognitive attributes that were designed to supplement existing standardized tests used in the admissions system for higher education in Chile. The importance of various facets of this process, including the establishment of technical rigor and…
Descriptors: Foreign Countries, College Entrance Examinations, Test Construction, Test Validity
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
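The DWU computation described in this abstract is simple enough to sketch. The following Python fragment is a hypothetical illustration assuming whitespace tokenization and two reference corpora of high- and low-scored essays; the names and details are illustrative, not Attali's implementation.

```python
from collections import Counter

def relative_freqs(essays):
    """Relative frequency of each word across a corpus of essays."""
    counts = Counter(w for essay in essays for w in essay.lower().split())
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def dwu_score(essay, high_freqs, low_freqs):
    """Average, over the essay's words, of the word's relative frequency in
    high-scored essays minus its relative frequency in low-scored essays.
    Positive values suggest vocabulary more typical of high-scored essays."""
    words = essay.lower().split()
    diffs = [high_freqs.get(w, 0.0) - low_freqs.get(w, 0.0) for w in words]
    return sum(diffs) / len(diffs) if diffs else 0.0

# Toy usage: build reference frequencies from tiny "corpora", then score an essay.
high = relative_freqs(["the analysis demonstrates a nuanced argument"])
low = relative_freqs(["the thing is good and the thing is nice"])
print(dwu_score("a nuanced argument is good", high, low))  # positive value
```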
Dorans, Neil J. – Educational Testing Service, 2010
Santelices and Wilson (2010) claimed to have addressed technical criticisms of Freedle (2003) presented in Dorans (2004a) and elsewhere. Santelices and Wilson's abstract claimed that their study confirmed that SAT[R] verbal items do function differently for African American and White subgroups. In this commentary, I demonstrate that the…
Descriptors: College Entrance Examinations, Verbal Tests, Test Bias, Test Items
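Differential item functioning of the kind at issue in this exchange is conventionally quantified at ETS with the Mantel-Haenszel procedure. The sketch below computes the MH common odds ratio and the delta-scale statistic for a single dichotomous item from hypothetical stratified counts; it illustrates the standard method, not Dorans's specific analysis.

```python
import math

def mh_dif(strata):
    """Mantel-Haenszel DIF for one dichotomous item.

    strata: one 2x2 table per matched total-score level, as dicts holding
    right/wrong counts for the reference and focal groups. Returns the
    common odds ratio and the ETS delta-scale statistic (MH D-DIF);
    negative delta indicates the item favors the reference group.
    """
    num = den = 0.0
    for s in strata:
        n = s["ref_right"] + s["ref_wrong"] + s["foc_right"] + s["foc_wrong"]
        num += s["ref_right"] * s["foc_wrong"] / n
        den += s["ref_wrong"] * s["foc_right"] / n
    alpha = num / den
    return alpha, -2.35 * math.log(alpha)

# Hypothetical counts at two matched score levels:
strata = [
    {"ref_right": 40, "ref_wrong": 10, "foc_right": 30, "foc_wrong": 20},
    {"ref_right": 60, "ref_wrong": 5, "foc_right": 50, "foc_wrong": 10},
]
alpha, delta = mh_dif(strata)
print(f"MH odds ratio = {alpha:.2f}, MH D-DIF = {delta:.2f}")
```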
Dorans, Neil J.; Liang, Longjuan; Puhan, Gautam – Educational Testing Service, 2010
Scores are the most visible and widely used products of a testing program. The choice of score scale has implications for test specifications, equating, and test reliability and validity, as well as for test interpretation. At the same time, the score scale should be viewed as infrastructure likely to require repair at some point. In this report…
Descriptors: Testing Programs, Standard Setting (Scoring), Test Interpretation, Certification
Ricker-Pedley, Kathryn L. – Educational Testing Service, 2011
A pseudo-experimental study was conducted to examine the link between raters' accuracy on calibration sets and their subsequent accuracy during operational scoring. The study asked 45 raters to score a 75-response calibration set and then a 100-response (operational) set of responses from a retired Graduate Record Examinations[R] (GRE[R]) writing…
Descriptors: Scoring, Accuracy, College Entrance Examinations, Writing Tests
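As a rough illustration of the quantities involved in such a study, the sketch below computes per-rater exact-agreement rates against reference scores and correlates calibration-phase accuracy with operational-phase accuracy. The data layout and all numbers are invented for illustration; they are not the study's data or results.

```python
import numpy as np

def agreement_rate(assigned, reference):
    """Proportion of responses a rater scored exactly as the reference score."""
    assigned, reference = np.asarray(assigned), np.asarray(reference)
    return float((assigned == reference).mean())

# Toy illustration with three raters; the study itself used 45 raters,
# a 75-response calibration set, and a 100-response operational set.
calibration_accuracy = [
    agreement_rate([4, 3, 5, 2], [4, 3, 4, 2]),  # rater 1: 0.75
    agreement_rate([4, 4, 5, 2], [4, 3, 4, 2]),  # rater 2: 0.50
    agreement_rate([4, 3, 4, 2], [4, 3, 4, 2]),  # rater 3: 1.00
]
operational_accuracy = [0.80, 0.55, 0.90]        # hypothetical later accuracy
print(np.corrcoef(calibration_accuracy, operational_accuracy)[0, 1])
```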
Martin-Raugh, Michelle P.; Reese, Clyde M.; Tannenbaum, Richard J.; Steinberg, Jonathan H.; Xu, Jun – Educational Testing Service, 2016
The purpose of this study is to explore the validity evidence supporting the high-leverage practices (HLPs) of the ETS® National Observational Teaching Exam (NOTE) assessment series, a kindergarten through 6th grade teacher licensure assessment. HLPs include "tasks and activities that are essential for skillful beginning teachers to…
Descriptors: Beginning Teachers, Elementary School Teachers, Teaching Skills, Educational Practices
Liu, Ou Lydia; Schedl, Mary; Malloy, Jeanne; Kong, Nan – Educational Testing Service, 2009
The TOEFL iBT[TM] has increased the length of the reading passages in the reading section compared to the passages on the TOEFL[R] computer-based test (CBT) to better approximate academic reading in North American universities, resulting in a reduced number of passages in the reading test. A concern arising from this change is whether the decrease…
Descriptors: English (Second Language), Language Tests, Internet, Computer Assisted Testing
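Concerns about shortening a test section, such as the reduced number of reading passages noted here, are usually reasoned about with the Spearman-Brown prophecy formula, which predicts how reliability changes with test length. The sketch below is offered as general psychometric background, not as the report's actual method.

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability after multiplying test length by length_factor
    (e.g., 0.75 if a four-passage section is cut to three passages)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A section with reliability .85 shortened to three-quarters of its length:
print(round(spearman_brown(0.85, 0.75), 3))  # ~0.81
```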
Ling, Guangming; Rijmen, Frank – Educational Testing Service, 2011
The factorial structure of the Time Management (TM) scale of the Student 360: Insight Program (S360) was evaluated based on a national sample. A general procedure with a variety of methods was introduced and implemented, including the computation of descriptive statistics, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).…
Descriptors: Time Management, Measures (Individuals), Statistical Analysis, Factor Analysis
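As a concrete starting point for the exploratory step of the procedure this abstract names, the sketch below fits a two-factor EFA with scikit-learn on simulated item responses. The factor count, item count, and data are assumptions, and the report's CFA step would call for a dedicated SEM package.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated responses: 500 students x 12 Likert-type time-management items
# generated from two latent factors (purely illustrative data).
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, 12))
items = latent @ loadings + rng.normal(scale=0.5, size=(500, 12))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(fa.components_.round(2))  # estimated loadings: factors x items
```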
Sawaki, Yasuyo; Nissan, Susan – Educational Testing Service, 2009
The study investigated the criterion-related validity of the "Test of English as a Foreign Language"[TM] Internet-based test (TOEFL[R] iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…
Descriptors: Test Validity, Language Tests, English (Second Language), Computer Assisted Testing
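Criterion-related validity studies of this kind report an observed test-criterion correlation and sometimes a version corrected for measurement error in both measures. The classical correction for attenuation below is offered as illustration of that second step, not as this study's actual analysis.

```python
import math

def disattenuated_r(r_xy, rel_x, rel_y):
    """Classical correction for attenuation: the test-criterion correlation
    expected if both measures were perfectly reliable."""
    return r_xy / math.sqrt(rel_x * rel_y)

# E.g., observed r = .55, test reliability .90, criterion reliability .80:
print(round(disattenuated_r(0.55, 0.90, 0.80), 2))  # ~0.65
```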
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
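The basic mechanism the abstract alludes to, a weighted combination of essay features tuned to predict human holistic scores, can be sketched as an ordinary linear regression. The feature names and data below are hypothetical; this is not ETS's production model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical essay features: grammar-error rate, mean word length,
# log word count. One row per essay; all values are invented.
X = np.array([
    [0.02, 4.1, 5.8],
    [0.10, 3.6, 4.9],
    [0.01, 4.5, 6.2],
    [0.07, 3.9, 5.3],
])
human_scores = np.array([5.0, 2.0, 6.0, 3.0])  # holistic scores on a 1-6 scale

model = LinearRegression().fit(X, human_scores)
print(model.coef_, model.intercept_)  # learned feature weights
print(model.predict(X).round(1))      # predicted holistic scores
```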
Swain, Merrill; Huang, Li-Shih; Barkaoui, Khaled; Brooks, Lindsay; Lapkin, Sharon – Educational Testing Service, 2009
This study responds to the Test of English as a Foreign Language[TM] (TOEFL[R]) research agenda concerning the need to understand the processes and knowledge that test-takers utilize. Specifically, it investigates the strategic behaviors test-takers reported using when taking the Speaking section of the TOEFL iBT[TM] (SSTiBT). It also investigates…
Descriptors: English (Second Language), Language Tests, Internet, Speech Skills
Educational Testing Service, 2010
This document describes the breadth of the research that the ETS (Educational Testing Service) Research & Development division is conducting in 2010. This portfolio will be updated in early 2011 to reflect changes to existing projects and new projects that were added after this document was completed. The research described in this portfolio falls…
Descriptors: Portfolios (Background Materials), Testing Programs, Educational Testing, Private Agencies
Millett, Catherine M.; Payne, David G.; Dwyer, Carol A.; Stickler, Leslie M.; Alexiou, Jon J. – Educational Testing Service, 2008
This paper presents a framework that institutions of higher education can use to improve, revise and introduce comprehensive systems for the collection and dissemination of information on student learning outcomes. For faculty and institutional leaders grappling with the many issues and nuances inherent in assessing student learning, the framework…
Descriptors: Higher Education, Educational Testing, Accountability, Outcomes of Education
Liu, Ou Lydia – Educational Testing Service, 2009
As college tuition and fees continue to grow, students, parents, and public policymakers are interested in understanding how public universities operate and whether their investments are well utilized. Accountability in public higher education has come into focus following the attention it has received in K-12 education. Against this…
Descriptors: Higher Education, State Universities, State Colleges, Accountability
O'Reilly, Tenaha; Sheehan, Kathleen M. – Educational Testing Service, 2009
This paper presents the rationale and research base for a reading competency model designed to guide the development of cognitively based assessment of reading comprehension. The model was developed from a detailed review of the cognitive research on reading and learning and a review of state standards for language arts. A survey of the literature…
Descriptors: Reading Skills, Reading Comprehension, Speech, State Standards