Showing 1 to 15 of 33 results
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed
PDF on ERIC
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Peer reviewed
Kirsch, Irwin; Lennon, Mary Louise – Large-scale Assessments in Education, 2017
As the largest and most innovative international assessment of adults, PIAAC marks an inflection point in the evolution of large-scale comparative assessments. PIAAC grew from the foundation laid by surveys that preceded it, and introduced innovations that have shifted the way we conceive and implement large-scale assessments. As the first fully…
Descriptors: International Assessment, Adults, Measurement, Surveys
Peer reviewed
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Michelle M. Neumann; Jason L. Anthony; Noé A. Erazo; David L. Neumann – Grantee Submission, 2019
The framework and tools used for classroom assessment can have significant impacts on teacher practices and student achievement. Getting assessment right is an important component in creating positive learning experiences and academic success. Recent government reports (e.g., United States, Australia) call for the development of systems that use…
Descriptors: Early Childhood Education, Futures (of Society), Educational Assessment, Evaluation Methods
Peer reviewed
Reynolds, Matthew R.; Niileksela, Christopher R. – Journal of Psychoeducational Assessment, 2015
"The Woodcock-Johnson IV Tests of Cognitive Abilities" (WJ IV COG) is an individually administered measure of psychometric intellectual abilities designed for ages 2 to 90+. The measure was published by Houghton Mifflin Harcourt-Riverside in 2014. Fredrick Schrank, Kevin McGrew, and Nancy Mather are the authors. Richard Woodcock, the…
Descriptors: Cognitive Tests, Testing, Scoring, Test Interpretation
Peer reviewed
Dorans, Neil J. – Educational Measurement: Issues and Practice, 2012
Views on testing--its purpose and uses and how its data are analyzed--are related to one's perspective on test takers. Test takers can be viewed as learners, examinees, or contestants. I briefly discuss the perspective of test takers as learners. I maintain that much of psychometrics views test takers as examinees. I discuss test takers as a…
Descriptors: Testing, Test Theory, Item Response Theory, Test Reliability
Peer reviewed
Montgomery, Janine Marie; Newton, Brendan; Smith, Christiane – Journal of Psychoeducational Assessment, 2008
The Gilliam Autism Rating Scale-Second Edition (GARS-2) is a screening tool for autism spectrum disorders for individuals between the ages of 3 and 22. It was designed to help differentiate those with autism from those with severe behavioral disorders as well as from those who are typically developing. It is a norm-referenced instrument that…
Descriptors: Autism, Rating Scales, Test Reviews, Norm Referenced Tests
Peer reviewed
Ludlow, Larry H.; O'Leary, Michael – Educational and Psychological Measurement, 1999
Focuses on the practical effects of using different statistical treatments with omitted and not-reached items in an item-response theory application. The strategy selected for scoring such items has considerable impact on the interpretation of results for individual or group-level assessments. (Author/SLD)
Descriptors: Data Analysis, Item Response Theory, Scoring, Test Interpretation
Samejima, Fumiko – 1996
Traditionally, the test score represented by the number of items answered correctly was taken as an indicator of the examinee's ability level. Researchers still tend to think that the number-correct score is a way of ordering individuals with respect to the latent trait. The objective of this study is to depict the benefits of using ability…
Descriptors: Ability, Attitude Measures, Estimation (Mathematics), Models
Peer reviewed
Crites, John O.; Savickas, Mark L. – Journal of Career Assessment, 1996
The Career Maturity Inventory was revised in 1995 using previously unpublished longitudinal data for item selection. The new inventory has 25 attitude and 25 competence items, each yielding a score that measures degree of career maturity of conative and cognitive variables, respectively. (SK)
Descriptors: Career Development, Measures (Individuals), Scoring, Test Interpretation
Livingston, Samuel A. – 1988
When test-takers are offered a choice of essay questions, some questions may be harder than others. If the test includes a common portion taken by all test-takers, an adjustment to the scores is possible. Previously proposed adjustment procedures disregard the test-makers' efforts to create questions of equal difficulty; these procedures tend to…
Descriptors: Advanced Placement, Correlation, Difficulty Level, Essays
Haenn, Joseph F. – 1981
Procedures for conducting functional level testing have been available for use by practitioners for some time. However, the Title I Evaluation and Reporting System (TIERS), developed in response to the educational amendments of 1974 to the Elementary and Secondary Education Act (ESEA), has provided the impetus for widespread adoption of this…
Descriptors: Achievement Tests, Difficulty Level, Scores, Scoring
Bradshaw, Charles W., Jr. – 1968
A method for determining invariant item parameters is presented, along with a scheme for obtaining test scores which are interpretable in terms of a common metric. The method assumes a unidimensional latent trait and uses a three parameter normal ogive model. The assumptions of the model are explored, and the methods for calculating the proposed…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Mathematical Models
Rhode Island State Dept. of Education, Providence. – 1999
In 1999, Rhode Island students in grades 3, 7, and 10 took a performance assessment in writing as part of the Rhode Island State Assessment Program. These two documents, a guide for educators and a guide for parents, describe the test and its scoring. The test is called a performance assessment because it requires the students to demonstrate their…
Descriptors: Elementary Secondary Education, High School Students, Parents, Performance Based Assessment