Jakwerth, Pamela M.; Stancavage, Frances B. – 2003
This study explored potential reasons behind student omission of responses to assessment questions. Understanding why students fail to answer certain questions may help inform the proper treatment of missing data during the estimation of item parameters and achievement distributions. The study was exploratory, small in scope, and qualitative in…
Descriptors: Elementary Secondary Education, Interviews, Junior High School Students, Junior High Schools
Weissman, Alexander – 2003
This study investigated the efficiency of item selection in a computerized adaptive test (CAT), where efficiency was defined in terms of the accumulated test information at an examinee's true ability level. A simulation methodology compared the efficiency of 2 item selection procedures with 5 ability estimation procedures for CATs of 5, 10, 15,…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Maximum Likelihood Statistics
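The maximum-information selection rule that studies like this one evaluate can be sketched as follows; the 2PL model, the item pool, and all parameter values here are illustrative assumptions, not details taken from the study.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Item information under the 2PL model: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta_hat, pool, administered):
    """Return the index of the unadministered item with maximum
    information at the current ability estimate."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta_hat, *pool[i]))

# Hypothetical pool of (a, b) parameter pairs.
pool = [(1.0, -2.0), (1.2, 0.1), (0.8, 2.0)]
print(select_item(0.0, pool, administered=set()))  # → 1 (difficulty nearest theta)
```

Accumulated test information at the true theta, the study's efficiency criterion, is just the sum of `fisher_info` over the administered items evaluated at the true ability.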
Swygert, Kimberly A. – 2003
In this study, data from an operational computerized adaptive test (CAT) were examined in order to gather information concerning item response times in a CAT environment. The CAT under study included multiple-choice items measuring verbal, quantitative, and analytical reasoning. The analyses included the fitting of regression models describing the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Participant Characteristics
van der Linden, Wim J. – 2002
The Sympson and Hetter (SH) method (J. Sympson and R. Hetter, 1985, 1997) is a probabilistic method of item-exposure control in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
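The SH procedure caps an item's administration rate by pairing selection with a probabilistic filter. A minimal sketch of the administration step (the control values and candidate ranking are assumed for illustration; the iterative simulation that calibrates the parameters, which the study addresses, is omitted):

```python
import random

def sh_administer(ranked, control, rng=random.random):
    """Walk down the information-ranked candidate list and administer
    item i with probability control[i], its SH exposure-control
    parameter (calibrated beforehand so no item exceeds the target
    exposure rate)."""
    for i in ranked:
        if rng() <= control[i]:
            return i
    return ranked[-1]  # every candidate rejected: give the last one

# With all control parameters at 1.0 the filter never blocks, so the
# most informative candidate is always administered.
print(sh_administer([3, 1, 2], {1: 1.0, 2: 1.0, 3: 1.0}))  # → 3
```

Items that would otherwise be over-selected receive control values below 1, diverting some administrations to the next-best candidates.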
van der Linden, Wim J.; Veldkamp, Bernard P. – 2002
Item-exposure control in computerized adaptive testing is implemented by imposing item-ineligibility constraints on the assembly process of the shadow tests. The method resembles J. Sympson and R. Hetter's (1985) method of item-exposure control in that the decisions to impose the constraints are probabilistic. However, the method does not require…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
Berk, Eric J. Vanden; Lohman, David F.; Cassata, Jennifer Coyne – 2001
Assessing the construct relevance of mental test results continues to present many challenges, and it has proven to be particularly difficult to assess the construct relevance of verbal items. This study was conducted to gain a better understanding of the conceptual sources of verbal item difficulty using a unique approach that integrates…
Descriptors: College Students, Construct Validity, Higher Education, Item Response Theory
Zenisky, April L.; Hambleton, Ronald K.; Sireci, Stephen G. – 2001
Measurement specialists routinely assume examinee responses to test items are independent of one another. However, previous research has shown that many contemporary tests contain item dependencies and not accounting for these dependencies leads to misleading estimates of item, test, and ability parameters. In this study, methods for detecting…
Descriptors: Ability, College Applicants, College Entrance Examinations, Higher Education
Lee, Yong-Won – 2000
This paper reports the results of an analysis of a reading comprehension test using the Q3 statistic developed by W. Yen (1984). Yen's Q3 can be a useful tool for examining local item dependence in the context of a reading comprehension test in which a reading passage is followed by a set of related items. Q3 is basically a…
Descriptors: Factor Analysis, Foreign Countries, High School Students, High Schools
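Yen's Q3 is the correlation between two items' residuals after an IRT model has been fit; values far from its near-zero baseline flag local dependence. A minimal sketch under an assumed Rasch model with known parameters (operationally both ability and difficulty would be estimated):

```python
import math

def rasch_p(theta, b):
    """Rasch-model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (c - my) for a, c in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((c - my) ** 2 for c in y))
    return cov / (sx * sy)

def q3(scores_i, scores_j, thetas, b_i, b_j):
    """Q3 for an item pair: correlate the model residuals u - P(theta)."""
    res_i = [u - rasch_p(t, b_i) for u, t in zip(scores_i, thetas)]
    res_j = [u - rasch_p(t, b_j) for u, t in zip(scores_j, thetas)]
    return pearson(res_i, res_j)
```

Under local independence Q3 hovers slightly below zero (roughly -1/(n_items - 1)); markedly positive values for items sharing a passage suggest a testlet effect.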
DeVito, Pasquale J., Ed.; Koenig, Judith A., Ed. – 2001
A committee of the National Research Council studied the desirability, feasibility, and potential impact of two reporting practices for National Assessment of Educational Progress (NAEP) results: district-level reporting and market-basket reporting. NAEP's sponsors believe that reporting district-level NAEP results would support state and local…
Descriptors: Elementary Secondary Education, Research Methodology, Research Reports, School Districts
Schnipke, Deborah L.; Roussos, Louis A.; Pashley, Peter J. – 2000
Differential item functioning (DIF) analyses are conducted to investigate how items function in various subgroups. The Mantel-Haenszel (MH) DIF statistic is used at the Law School Admission Council and other testing companies. When item functioning can be well-described in terms of a one- or two-parameter logistic item response theory (IRT) model…
Descriptors: College Entrance Examinations, Comparative Analysis, Item Bias, Item Response Theory
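The MH statistic pools the 2x2 tables (group by correct/incorrect) across matched total-score levels into a common odds ratio, usually reported on the ETS delta scale. A sketch with assumed counts:

```python
import math

def mh_odds_ratio(strata):
    """strata: (A, B, C, D) counts per matched score level, where
    A/B = reference group correct/incorrect and
    C/D = focal group correct/incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(alpha):
    """ETS delta metric: 0 means no DIF; negative values indicate the
    item favors the reference group."""
    return -2.35 * math.log(alpha)

# Equal odds in every stratum -> alpha = 1, delta = 0 (no DIF).
strata = [(10, 10, 10, 10), (20, 5, 20, 5)]
print(round(mh_delta(mh_odds_ratio(strata)), 6))  # → 0.0
```

On the ETS classification, |delta| below 1 is negligible (A-level) and at or above 1.5 with significance is large (C-level) DIF.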
Green, Bert F. – 2002
Maximum likelihood and Bayesian estimates of proficiency, typically used in adaptive testing, use item weights that depend on test taker proficiency to estimate test taker proficiency. In this study, several methods were explored through computer simulation using fixed item weights, which depend mainly on the items' difficulty. The simpler scores…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
Plumer, Gilbert E. – 2000
In the context of examining the feasibility and advisability of computerizing the Law School Admission Test (LSAT), a review of current literature was conducted with the following goals: (1) determining the skills that are most important in good legal reasoning according to the literature; (2) determining the extent to which existing LSAT item…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
Schulz, E. Matthew; Wang, Lin – 2001
In this study, items were drawn from a full-length test of 30 items in order to construct shorter tests for the purpose of making accurate pass/fail classifications with regard to a specific criterion point on the latent ability metric. A three-parameter Item Response Theory (IRT) framework was used. The criterion point on the latent ability…
Descriptors: Ability, Classification, Item Response Theory, Pass Fail Grading
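One way to read the design goal behind such short forms: concentrate test information at the cut score. A greedy sketch under the three-parameter logistic (3PL) model (the pool, cut score, and parameter values are all hypothetical):

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability: guessing floor c plus a logistic component."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """3PL item information at theta."""
    p = p_3pl(theta, a, b, c)
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def short_form(pool, cut, k):
    """Greedily keep the k items most informative at the cut score."""
    ranked = sorted(range(len(pool)), key=lambda i: -info_3pl(cut, *pool[i]))
    return sorted(ranked[:k])

pool = [(1.0, -1.0, 0.2), (1.0, 0.0, 0.2), (1.0, 1.0, 0.2)]
print(short_form(pool, cut=0.0, k=1))  # → [1]
```

Maximizing information at the cut point minimizes the standard error of measurement exactly where a pass/fail decision is made, which is why a short form built this way can classify nearly as accurately as the full-length test.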
Parshall, Cynthia G.; Kromrey, Jeffrey D.; Harmes, J. Christine; Sentovich, Christina – 2001
Computerized adaptive tests (CATs) are efficient because of their optimal item selection procedures that target maximally informative items at each estimated ability level. However, operational administration of these optimal CATs results in a relatively small subset of items being given to examinees too often, while another portion of the item pool is…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Ediger, Marlow – 2001
To assure the fair and honest grading of student achievement, validity and reliability are key to writing test items. Clarity in writing each item is essential. Multiple procedures of assessing the achievement of university students should be implemented, and instructors and professors should be held accountable for the fair and honest grading of…
Descriptors: Academic Achievement, College Students, Educational Technology, Grades (Scholastic)


