Kortemeyer, Gerd – Physical Review Special Topics - Physics Education Research, 2014
Item response theory (IRT) is becoming an increasingly important tool for analyzing "big data" gathered from online educational venues. However, the methodology was originally developed in traditional exam settings, and several of its assumptions are violated when it is deployed in the online realm. For a large-enrollment physics course for…
Descriptors: Item Response Theory, Online Courses, Electronic Learning, Homework
Green, Bert F. – Applied Psychological Measurement, 2011
This article refutes a recent claim that computer-based tests produce biased scores for very proficient test takers who make mistakes on one or two initial items and that the "bias" can be reduced by using a four-parameter IRT model. Because the same effect occurs with pattern scores on nonadaptive tests, the effect results from IRT scoring, not…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Bias, Item Response Theory
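The four-parameter IRT model mentioned above extends the usual logistic item response function with an upper asymptote below 1, so that even highly proficient examinees have some probability of slipping on an item. A minimal sketch of that response function (the parameter names a, b, c, d follow common IRT convention and are not drawn from the article itself):

```python
import math

def p_correct_4pl(theta, a, b, c, d):
    """Probability of a correct response under the 4PL IRT model.

    theta -- examinee ability
    a     -- item discrimination
    b     -- item difficulty
    c     -- lower asymptote (pseudo-guessing floor)
    d     -- upper asymptote below 1, allowing slips by able examinees
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# With d < 1, an early mistake by a proficient examinee is less
# informative, so the likelihood penalizes it less severely than
# the 2PL/3PL models (which fix d = 1) would.
print(p_correct_4pl(theta=2.0, a=1.2, b=0.0, c=0.2, d=0.97))
```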
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
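To make the selection mechanism described in this abstract concrete, here is a minimal CAT sketch under a Rasch (1PL) model: each step administers the unused item whose difficulty is closest to the current ability estimate, then nudges the estimate according to the scored response. The item bank, the fixed step-size update, and the stopping rule are illustrative assumptions, not the procedure used in the study.

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability of correctly answering an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(bank, answer, n_items=5, theta=0.0, step=0.5):
    """Minimal adaptive test: repeatedly pick the unused item whose
    difficulty is closest to the current estimate, then move the
    estimate up or down by a fixed step based on the response."""
    remaining = dict(bank)                      # item_id -> difficulty
    for _ in range(n_items):
        item = min(remaining, key=lambda i: abs(remaining[i] - theta))
        remaining.pop(item)
        theta += step if answer(item) else -step
    return theta

# Hypothetical 10-item bank with difficulties spread over [-2, 2],
# and a simulated examinee of true ability 1.0.
bank = {i: -2.0 + 4.0 * i / 9 for i in range(10)}
estimate = run_cat(bank, lambda i: random.random() < p_correct(1.0, bank[i]))
print("ability estimate:", estimate)
```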
Ferdous, Abdullah A.; Plake, Barbara S.; Chang, Shu-Ren – Educational Assessment, 2007
The purpose of this study was to examine the effect of pretest items on response time in an operational, fixed-length, time-limited computerized adaptive test (CAT). These pretest items are embedded within the CAT, but unlike the operational items, are not tailored to the examinee's ability level. If examinees with higher ability levels need less…
Descriptors: Pretests Posttests, Reaction Time, Computer Assisted Testing, Test Items
Chang, Shu-Ren; Plake, Barbara S.; Ferdous, Abdullah A. – Online Submission, 2005
This study examined the time that examinees of different ability levels spend taking a CAT, focusing on items that are demanding for those examinees. It was also found that high-ability examinees spend more time on the pretest items, which are not tailored to the examinees' ability level, than do lower-ability examinees. Higher-ability examinees showed persistence with test…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Reaction Time
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computer adaptive testing scores tests and items based on assumptions about the mathematical relationship between examinees' ability and their responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least squares method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
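The least-squares approach named in this abstract can be sketched as choosing the ability value that minimizes the squared gap between the observed 0/1 responses and the model-predicted probabilities. The 1PL response model and the simple grid search below are simplifying assumptions for illustration, not the authors' implementation:

```python
import math

def p_correct(theta, b):
    """1PL probability of a correct response (illustrative model choice)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def least_squares_theta(responses, difficulties):
    """Grid-search the ability estimate minimizing the sum of squared
    residuals between observed responses (0/1) and the predicted
    probabilities of a correct answer."""
    grid = [g / 100.0 for g in range(-400, 401)]   # theta in [-4, 4]
    def sse(theta):
        return sum((r - p_correct(theta, b)) ** 2
                   for r, b in zip(responses, difficulties))
    return min(grid, key=sse)

# Example: five items; the examinee misses only the two hardest.
print(least_squares_theta([1, 1, 1, 0, 0], [-1.5, -0.5, 0.0, 1.0, 2.0]))
```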
Samejima, Fumiko – 1981
This is a continuation of a previous study in which a new method of estimating the operating characteristics of discrete item responses based upon an Old Test, which has a non-constant test information function, was tested upon each of two subtests of the original Old Test, Subtests 1 and 2. The results turned out to be quite successful. In the…
Descriptors: Academic Ability, Computer Assisted Testing, Estimation (Mathematics), Latent Trait Theory
Prestwood, J. Stephen; Weiss, David J. – 1978
Volunteer college students were assigned to one of six computer administered vocabulary tests, one half with immediate knowledge of results (KR) after responding to each item, and the other half without knowledge of results. The six tests were designed to be at one of three levels of difficulty and consisted either of 50 preselected items…
Descriptors: Academic Ability, Adaptive Testing, Anxiety, Computer Assisted Testing

Jelden, D. L. – Journal of Educational Technology Systems, 1988
Reviews study conducted to compare levels of achievement on final exams for college students responding to combinations of test-item feedback methods and modes of test-item presentation. The PHOENIX computer system used in the comparison is described, and the use of ACT (American College Testing Program) scores for ability comparison is discussed.…
Descriptors: Academic Ability, Academic Achievement, Achievement Tests, Analysis of Covariance