Showing all 8 results
Peer reviewed
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Peer reviewed
PDF full text available on ERIC
Akbay, Tuncer; Akbay, Lokman; Erol, Osman – Malaysian Online Journal of Educational Technology, 2021
Integration of e-learning and computerized assessments into many levels of educational programs has been increasing as digital technology progresses. Due to several prominent advantages of computer-based testing (CBT), a rapid transition in test administration mode from paper-based testing (PBT) to CBT has emerged. Recently, many national and…
Descriptors: Computer Assisted Testing, Testing, High Stakes Tests, International Assessment
Peer reviewed
Gu, Lin; Ling, Guangming; Liu, Ou Lydia; Yang, Zhitong; Li, Guirong; Kardanova, Elena; Loyalka, Prashant – Assessment & Evaluation in Higher Education, 2021
We examine the effects of computer-based versus paper-based assessment of critical thinking skills, adapted from English (in the U.S.) to Chinese. Using data collected based on a random assignment between the two modes in multiple Chinese colleges, we investigate mode effects from multiple perspectives: mean scores, measurement precision, item…
Descriptors: Critical Thinking, Tests, Test Format, Computer Assisted Testing
Peer reviewed
PDF full text available on ERIC
Storme, Martin; Myszkowski, Nils; Baron, Simon; Bernard, David – Journal of Intelligence, 2019
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in…
Descriptors: Intelligence Tests, Item Response Theory, Comparative Analysis, Test Reliability
Peer reviewed
PDF full text available on ERIC
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the GRE® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Peer reviewed
Johnson, Philip; Tymms, Peter – Journal of Research in Science Teaching, 2011
Previously, a small-scale, interview-based, 3-year longitudinal study (ages 11-14) in one school had suggested a learning progression related to the concept of a substance. This article presents the results of a large-scale, cross-sectional study which used Rasch modeling to test the hypothesis of the learning progression. Data were collected from…
Descriptors: Computer Assisted Testing, Chemistry, Measures (Individuals), Foreign Countries
Peer reviewed
Yip, Chi Kwong; Man, David W. K. – International Journal of Rehabilitation Research, 2009
This study investigates the validity of a newly developed computerized cognitive assessment system (CCAS) that is equipped with rich multimedia to generate simulated testing situations and considers both test item difficulty and the test taker's ability. It is also hypothesized that better predictive validity of the CCAS in self-care of persons…
Descriptors: Test Items, Content Validity, Predictive Validity, Patients
Peer reviewed
Wainer, Howard – Journal of College Admissions, 1983
Discusses changes in testing that have resulted from the availability of extensive, inexpensive computing and from recent developments in statistical test theory. Describes the role of the Computerized Adaptive Test (CAT) and modern Item Response Theory (IRT) in ability testing tailored to each student's knowledge and ability. (JAC)
Descriptors: Cognitive Ability, College Entrance Examinations, Computer Assisted Testing, Higher Education