| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 29 |
| Since 2022 (last 5 years) | 168 |
| Since 2017 (last 10 years) | 329 |
| Since 2007 (last 20 years) | 613 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 1057 |
| Test Items | 1057 |
| Adaptive Testing | 448 |
| Test Construction | 385 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
| Audience | Count |
| --- | --- |
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
| Location | Count |
| --- | --- |
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose – Applied Psychological Measurement, 2010
In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…
Descriptors: Test Items, Simulation, Adaptive Testing, Item Analysis
Dolan, Robert P.; Burling, Kelly; Harms, Michael; Strain-Seymour, Ellen; Way, Walter; Rose, David H. – Pearson, 2013
The increased capabilities offered by digital technologies offer new opportunities to evaluate students' deeper knowledge and skills and on constructs that are difficult to measure using traditional methods. Such assessments can also incorporate tools and interfaces that improve accessibility for diverse students, as well as inadvertently…
Descriptors: Educational Technology, Technology Uses in Education, Access to Education, Evaluation Methods
Laborda, Jesus Garcia; Bakieva, Margarita; Gonzalez-Such, Jose; Pavon, Ana Sevilla – Online Submission, 2010
Since the Spanish educational system is changing and promoting the use of online tests, it was necessary to study the transformation of test items in the "Spanish University Entrance Examination" (IB P.A.U.) to diminish the effect of test delivery changes (through its computerization) and affect the current model as little as possible. The…
Descriptors: Foreign Countries, College Entrance Examinations, Computer Assisted Testing, Test Items
Lin, Chuan-Ju – Journal of Technology, Learning, and Assessment, 2010
Assembling equivalent test forms with minimal test overlap across forms is important in ensuring test security. Chen and Lei (2009) suggested an exposure control technique--ordered item pooling--to control test overlap on the fly, based on the fact that the test overlap rate under ordered item pooling for the first t examinees is a function of test overlap…
Descriptors: Test Length, Test Format, Evaluation Criteria, Psychometrics
Taherbhai, Husein; Seo, Daeryong; Bowman, Trinell – British Educational Research Journal, 2012
Literature in the United States provides many examples of no difference in student achievement when measured against the mode of test administration, i.e., paper-pencil and online versions of the test. However, most of this research centres on "regular" students who do not require differential teaching methods or different evaluation…
Descriptors: Learning Disabilities, Statistical Analysis, Teaching Methods, Test Format
O'Sullivan, Timothy P.; Hargaden, Gráinne C. – Journal of Chemical Education, 2014
This article describes the development and implementation of an open-access organic chemistry question bank for online tutorials and assessments at University College Cork and Dublin Institute of Technology. SOCOT (structure-based organic chemistry online tutorials) may be used to supplement traditional small-group tutorials, thereby allowing…
Descriptors: Organic Chemistry, Tutorial Programs, Online Courses, Error Correction
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Flowers, Claudia; Kim, Do-Hong; Lewis, Preston; Davis, Violeta Carmen – Journal of Special Education Technology, 2011
This study examined the academic performance and preference of students with disabilities for two types of test administration conditions, computer-based testing (CBT) and pencil-and-paper testing (PPT). Data from a large-scale assessment program were used to examine differences between CBT and PPT academic performance for third to eleventh grade…
Descriptors: Testing, Test Items, Effect Size, Computer Assisted Testing
Kramer, Jessica M.; Coster, Wendy J.; Kao, Ying-Chia; Snow, Anne; Orsmond, Gael I. – Physical & Occupational Therapy in Pediatrics, 2012
The use of current adaptive behavior measures in practice and research is limited by their length and need for a professional interviewer. There is a need for alternative measures that more efficiently assess adaptive behavior in children and youth with autism spectrum disorders (ASDs). The Pediatric Evaluation of Disability Inventory-Computer…
Descriptors: Feedback (Response), Autism, Focus Groups, Computer Assisted Testing
Kim, Do-Hong; Huynh, Huynh – Educational Assessment, 2010
This study investigated whether scores obtained from the online and paper-and-pencil administrations of the statewide end-of-course English test were equivalent for students with and without disabilities. Score comparability was evaluated by examining equivalence of factor structure (measurement invariance) and differential item and bundle…
Descriptors: Computer Assisted Testing, Language Tests, English, Scores
Chang, Yuan-chin Ivan; Lu, Hung-Yi – Psychometrika, 2010
Item calibration is an essential issue in modern item response theory-based psychological or educational testing. Due to the popularity of computerized adaptive testing, methods to efficiently calibrate new items have become more important than in the era when paper-and-pencil test administration was the norm. There are many calibration…
Descriptors: Test Items, Educational Testing, Adaptive Testing, Measurement
Cheng, Ying; Chang, Hua-Hua; Douglas, Jeffrey; Guo, Fanmin – Educational and Psychological Measurement, 2009
a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
An Investigation of Scale Drift for Arithmetic Assessment of ACCUPLACER®. Research Report No. 2010-2
Deng, Hui; Melican, Gerald – College Board, 2010
The current study was designed to extend the current literature on scale drift in CAT as part of improving the quality control and calibration process for ACCUPLACER, a battery of large-scale adaptive placement tests. The study aims to evaluate item parameter drift using empirical data that span four years from the ACCUPLACER Arithmetic…
Descriptors: Student Placement, Adaptive Testing, Computer Assisted Testing, Mathematics Tests
Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Shahnazari-Dorcheh, Mohammadtaghi; Roshan, Saeed – English Language Teaching, 2012
Due to the lack of a span test for use in language-specific and cross-language studies, this study provides L1 and L2 researchers with a reliable language-independent span test (math span test) for the measurement of working memory capacity. It also describes the development, validation, and scoring method of this test. This test included 70…
Descriptors: Language Research, Native Language, Second Language Learning, Scoring