Showing 1 to 15 of 16 results
Peer reviewed
van Rijn, Peter W.; Attali, Yigal; Ali, Usama S. – Journal of Experimental Education, 2023
We investigated whether and to what extent different scoring instructions, timing conditions, and direct feedback affect performance and speed. An experimental study manipulating these factors was designed to address these research questions. According to the factorial design, participants were randomly assigned to one of twelve study conditions.…
Descriptors: Scoring, Time, Feedback (Response), Performance
Peer reviewed
Bejar, Isaac I.; Deane, Paul D.; Flor, Michael; Chen, Jing – ETS Research Report Series, 2017
The report is the first systematic evaluation of the sentence equivalence item type introduced by the "GRE"® revised General Test. We adopt a validity framework to guide our investigation based on Kane's approach to validation whereby a hierarchy of inferences that should be documented to support score meaning and interpretation is…
Descriptors: College Entrance Examinations, Graduate Study, Generalization, Inferences
Peer reviewed
Oliveri, Maria Elena; Lawless, Rene; Molloy, Hillary – ETS Research Report Series, 2017
The literature and employee and workforce surveys rank collaborative problem solving (CPS) among the top five most critical skills necessary for success in college and the workforce. This paper provides a review of the literature on CPS and related terms, including a discussion of their definitions, importance to higher education and workforce…
Descriptors: Cooperative Learning, Problem Solving, College Readiness, Career Readiness
Peer reviewed
Dorans, Neil J. – Educational Measurement: Issues and Practice, 2012
Views on testing--its purpose and uses and how its data are analyzed--are related to one's perspective on test takers. Test takers can be viewed as learners, examinees, or contestants. I briefly discuss the perspective of test takers as learners. I maintain that much of psychometrics views test takers as examinees. I discuss test takers as a…
Descriptors: Testing, Test Theory, Item Response Theory, Test Reliability
Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing – Online Submission, 2007
Computerized adaptive testing (CAT) offers significant advantages over traditional testing and has become mainstream in large-scale examinations. This paper gives a brief introduction to CAT, including differences between traditional testing and CAT, the principles of CAT, psychometric theory and computer algorithms of CAT, the…
Descriptors: Foreign Countries, Psychometrics, Adaptive Testing, Computer Assisted Testing
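The adaptive principle this survey summarizes — choose each next item to be maximally informative at the examinee's current ability estimate — can be sketched under a two-parameter logistic (2PL) IRT model. This is an illustrative sketch only, not code from the paper; all function and parameter names are assumptions of this example.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response at ability theta,
    for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_next_item(theta, items, used):
    """Maximum-information item selection: return the index of the
    unused (a, b) item that is most informative at theta."""
    best, best_info = None, -1.0
    for i, (a, b) in enumerate(items):
        if i not in used:
            info = item_information(theta, a, b)
            if info > best_info:
                best, best_info = i, info
    return best

# A well-targeted item (difficulty near the ability estimate) wins:
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]  # (a, b) pairs
print(pick_next_item(0.0, items, used=set()))  # → 1
```

After each response the ability estimate is updated (e.g., by maximum likelihood) and the selection step repeats, which is what distinguishes CAT from fixed-form testing.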
Peer reviewed
Attali, Yigal; Powers, Don; Hawthorn, John – ETS Research Report Series, 2008
Registered examinees for the GRE® General Test answered open-ended sentence-completion items. For half of the items, participants received immediate feedback on the correctness of their answers and up to two opportunities to revise their answers. A significant feedback-and-revision effect was found. Participants were able to correct many of their…
Descriptors: College Entrance Examinations, Graduate Study, Sentences, Psychometrics
Kingston, Neal M.; Dorans, Neil J. – 1982
The feasibility of using item response theory (IRT) as a psychometric model for the Graduate Record Examination (GRE) Aptitude Test was addressed by assessing the reasonableness of the assumptions of item response theory for GRE item types and examinee populations. Items from four forms and four administrations of the GRE Aptitude Test were…
Descriptors: Aptitude Tests, Graduate Study, Higher Education, Latent Trait Theory
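A core IRT assumption assessed in studies like this one is local independence: conditional on ability, item responses are independent, so the likelihood of a response pattern factors into per-item probabilities. A minimal sketch under a three-parameter logistic (3PL) model — an illustration of the general model, not the authors' procedure:

```python
import math

def p_3pl(theta, a, b, c):
    """3PL IRT: guessing parameter c raises the lower asymptote."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def pattern_likelihood(theta, items, responses):
    """Under local independence, the likelihood of a 0/1 response
    pattern is the product of per-item probabilities."""
    like = 1.0
    for (a, b, c), u in zip(items, responses):
        p = p_3pl(theta, a, b, c)
        like *= p if u else (1.0 - p)
    return like

# At theta = b, a 3PL item with c = 0.2 yields p = (1 + c) / 2 = 0.6:
print(round(p_3pl(0.0, 1.0, 0.0, 0.2), 2))  # → 0.6
```

If local independence or unidimensionality fails for some item types, this factorization misstates the likelihood, which is why such assumption checks precede operational use of IRT.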
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.; Singley, Mark K.; Katz, Irvin R.; Nhouyvanisvong, Adisack – Journal of Educational Measurement, 1999
Evaluated a computer-delivered response type for measuring quantitative skill, the "Generating Examples" (GE) response type, which presents under-determined problems that can have many right answers. Results from 257 graduate students and applicants indicate that GE scores are reasonably reliable, but only moderately related to Graduate…
Descriptors: College Applicants, Computer Assisted Testing, Graduate Students, Graduate Study
Bennett, Randy Elliot; And Others – 1986
The psychometric characteristics of the Graduate Record Examinations General Test (GRE-GT) were studied for three handicapped groups. Experimental subjects took the GRE-GT between October 1981 and June 1984; they include: (1) 151 visually-impaired students taking large-type, extended-time administrations; (2) 188 visually-impaired students taking…
Descriptors: College Entrance Examinations, Comparative Analysis, Graduate Study, Higher Education
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Special Education, 1987
This study examined the score level, extent of test completion, and test reliability for visually impaired, physically handicapped, and nonhandicapped groups taking the Graduate Record Examinations General Test. Results included the finding that performance of visually handicapped groups approximated that of nondisabled examinees, although…
Descriptors: College Admission, College Entrance Examinations, College Students, Graduate Study
Peer reviewed
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
Peer reviewed
Sinharay, Sandip; Johnson, Matthew – ETS Research Report Series, 2005
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate/produce items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996; Bejar, 2002). They have the potential to produce large number of high-quality items at reduced cost. This paper introduces…
Descriptors: Item Analysis, Test Items, Scoring, Psychometrics
Rock, D. A.; And Others – 1982
The study evaluated the invariance of the construct validity and thus the interpretation of Graduate Record Examinations (GRE) Aptitude Test scores. A systematic procedure for investigation of test bias from a construct validity frame of reference was developed and applied. Invariant construct validity was defined as similar patterns of loadings…
Descriptors: Black Students, College Entrance Examinations, Factor Structure, Graduate Study
Peer reviewed
Graf, Edith Aurora; Peterson, Stephen; Steffen, Manfred; Lawless, René – ETS Research Report Series, 2005
We describe the item modeling development and evaluation process as applied to a quantitative assessment with high-stakes outcomes. In addition to expediting the item-creation process, a model-based approach may reduce pretesting costs, if the difficulty and discrimination of model-generated items may be predicted to a predefined level of…
Descriptors: Psychometrics, Accuracy, Item Analysis, High Stakes Tests