Showing 1 to 15 of 17 results
Peer reviewed
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This study investigates the accuracy of item response theory (IRT) proficiency estimators under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
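The Kim, Moses, and Yoo entry above describes a two-stage MST design with one Stage 1 routing module and three Stage 2 difficulty paths. As a minimal sketch of how such a design is commonly simulated (not the study's actual panels), the following assumes a 2PL model, grid-based maximum-likelihood scoring, and arbitrary routing cutoffs of ±0.5; the item parameters are randomly generated placeholders.

    import numpy as np

    def p_2pl(theta, a, b):
        """2PL probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def grid_mle(responses, a, b, grid=np.linspace(-4, 4, 161)):
        """Crude maximum-likelihood ability estimate over a grid."""
        p = p_2pl(grid[:, None], a, b)                       # shape (grid points, items)
        loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
        return grid[np.argmax(loglik)]

    rng = np.random.default_rng(1)

    # Placeholder item parameters for the four modules (not the study's panels).
    stage1 = dict(a=rng.uniform(0.8, 1.6, 10), b=rng.normal(0.0, 0.7, 10))
    stage2 = {
        "low":    dict(a=rng.uniform(0.8, 1.6, 10), b=rng.normal(-1.0, 0.5, 10)),
        "middle": dict(a=rng.uniform(0.8, 1.6, 10), b=rng.normal( 0.0, 0.5, 10)),
        "high":   dict(a=rng.uniform(0.8, 1.6, 10), b=rng.normal( 1.0, 0.5, 10)),
    }

    theta_true = 0.8
    r1 = rng.binomial(1, p_2pl(theta_true, stage1["a"], stage1["b"]))
    theta_stage1 = grid_mle(r1, stage1["a"], stage1["b"])

    # Route to the Stage 2 module whose difficulty best matches the provisional estimate.
    path = "low" if theta_stage1 < -0.5 else "high" if theta_stage1 > 0.5 else "middle"
    mod = stage2[path]
    r2 = rng.binomial(1, p_2pl(theta_true, mod["a"], mod["b"]))

    theta_final = grid_mle(np.concatenate([r1, r2]),
                           np.concatenate([stage1["a"], mod["a"]]),
                           np.concatenate([stage1["b"], mod["b"]]))
    print(path, round(theta_final, 2))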
Peer reviewed
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2016
Meijer and van Krimpen-Stoop noted that the number of person-fit statistics (PFSs) that have been designed for computerized adaptive tests (CATs) is relatively modest. This article partially addresses that concern by suggesting three new PFSs for CATs. The statistics are based on tests for a change point and can be used to detect an abrupt change…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Goodness of Fit
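Sinharay's statistics above are based on tests for a change point in a CAT response string. The snippet does not give the formulas, so the sketch below illustrates only the generic change-point idea under a 2PL model: split the response string at every feasible point, estimate ability on each segment, and flag the examinee when the largest standardized difference between the two estimates is extreme. The estimator, the minimum segment length, and the use of an absolute standardized difference are illustrative assumptions, not the article's proposed statistics.

    import numpy as np

    def p_2pl(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def mle_and_se(resp, a, b, grid=np.linspace(-4, 4, 321)):
        """Grid MLE of ability plus an information-based standard error."""
        p = p_2pl(grid[:, None], a, b)
        loglik = (resp * np.log(p) + (1 - resp) * np.log(1 - p)).sum(axis=1)
        theta = grid[np.argmax(loglik)]
        p_hat = p_2pl(theta, a, b)
        info = (a**2 * p_hat * (1 - p_hat)).sum()
        return theta, 1.0 / np.sqrt(info)

    def change_point_statistic(resp, a, b, min_seg=5):
        """Largest standardized difference between pre- and post-split ability estimates."""
        stats = []
        for k in range(min_seg, len(resp) - min_seg):
            t1, se1 = mle_and_se(resp[:k], a[:k], b[:k])
            t2, se2 = mle_and_se(resp[k:], a[k:], b[k:])
            stats.append(abs(t1 - t2) / np.sqrt(se1**2 + se2**2))
        return max(stats)

    # Example: an examinee whose effective ability drops halfway through the test.
    rng = np.random.default_rng(7)
    a = rng.uniform(0.8, 1.8, 40)
    b = rng.normal(0.0, 1.0, 40)
    resp = np.concatenate([rng.binomial(1, p_2pl(1.5, a[:20], b[:20])),
                           rng.binomial(1, p_2pl(-1.0, a[20:], b[20:]))])
    print(round(change_point_statistic(resp, a, b), 2))     # large values flag an abrupt change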
Peer reviewed
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
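The Cheng, Patton, and Shao entry above contrasts maximum Fisher information (MFI) selection with a-stratified selection with b-blocking (AST). The sketch below shows the two selection rules in their simplest form for a simulated 2PL pool, assuming AST partitions the pool into strata of increasing discrimination and picks, within the scheduled stratum, the unused item whose difficulty is closest to the provisional ability; b-blocking (balancing difficulty when the strata are formed) is noted but omitted, and the pool itself is a placeholder.

    import numpy as np

    def fisher_info_2pl(theta, a, b):
        """Fisher information of a 2PL item at ability theta."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1 - p)

    def select_mfi(theta, a, b, used):
        """MFI: the most informative unused item at the provisional ability."""
        info = fisher_info_2pl(theta, a, b)
        if used:
            info[list(used)] = -np.inf
        return int(np.argmax(info))

    def select_ast(theta, b, used, stratum_items):
        """a-stratified selection (sketch): within the stratum scheduled for the current
        stage, pick the unused item whose difficulty is closest to the provisional ability."""
        candidates = [i for i in stratum_items if i not in used]
        return int(min(candidates, key=lambda i: abs(b[i] - theta)))

    rng = np.random.default_rng(3)
    a = rng.lognormal(0.0, 0.3, 300)
    b = rng.normal(0.0, 1.0, 300)

    # Strata ordered from low to high discrimination; early stages draw from the low-a
    # stratum, which is what spreads item exposure more evenly than MFI. b-blocking
    # (balancing difficulty across strata when they are formed) is omitted for brevity.
    strata = np.array_split(np.argsort(a), 3)

    print("MFI pick:", select_mfi(0.0, a, b, used=set()))
    print("AST pick:", select_ast(0.0, b, used=set(), stratum_items=strata[0]))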
Fomenko, Julie Ann Schwein – ProQuest LLC, 2017
Twenty-first-century healthcare is a complex and demanding arena. Today's hospital environment is more complex than in previous years, and patients move through the system at a much faster pace. Newly graduated nurses are challenged in their first year with the healthcare needs of complex patients. Nurse educators and nurse leaders differ in…
Descriptors: Simulation, Nurses, Nursing Education, Competence
Peer reviewed
Faraci, Palmira; Hell, Benedikt; Schuler, Heinz – Creativity Research Journal, 2016
This article describes the psychometric properties of the Italian adaptation of the "Analyse des Schlussfolgernden und Kreativen Denkens" (ASK; Test of Inferential and Creative Thinking) for measuring inferential and creative thinking. The study aimed to (a) supply evidence for the factorial structure of the instrument, (b) describe its…
Descriptors: Creative Thinking, Psychometrics, Adaptive Testing, Measures (Individuals)
Peer reviewed
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
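Leroux, Lopez, Hembry, and Dodd above compare the progressive-restricted standard error procedure with the randomesque and Sympson-Hetter (SH) methods and no exposure control. The PR-SE algorithm is not described in the snippet, so the sketch below covers only two of the baselines for a simulated 3PL pool: randomesque selection (a random pick among the k most informative unused items) and an SH-style probabilistic filter. The exposure parameters, pool, and k are assumed values; in practice the SH parameters are set by iterative simulation.

    import numpy as np

    rng = np.random.default_rng(11)

    def info_3pl(theta, a, b, c):
        """Fisher information of a 3PL item at ability theta."""
        p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
        return a**2 * ((p - c) / (1 - c))**2 * (1 - p) / p

    def randomesque(theta, a, b, c, used, k=5):
        """Randomesque: choose at random among the k most informative unused items."""
        info = info_3pl(theta, a, b, c)
        if used:
            info[list(used)] = -np.inf
        return int(rng.choice(np.argsort(info)[-k:]))

    def sympson_hetter(theta, a, b, c, used, exposure_k):
        """SH-style filter: administer the best item only with probability K_i;
        otherwise skip it for this examinee and try the next most informative item."""
        info = info_3pl(theta, a, b, c)
        if used:
            info[list(used)] = -np.inf
        for i in np.argsort(info)[::-1]:
            if rng.random() < exposure_k[i]:
                return int(i)
        return int(np.argmax(info))                    # fallback; rarely reached

    a = rng.lognormal(0.0, 0.3, 200)
    b = rng.normal(0.0, 1.0, 200)
    c = np.full(200, 0.2)
    exposure_k = rng.uniform(0.5, 1.0, 200)            # assumed; normally set by iterative simulation

    print("randomesque pick:", randomesque(0.0, a, b, c, used=set()))
    print("Sympson-Hetter pick:", sympson_hetter(0.0, a, b, c, used=set(), exposure_k=exposure_k))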
Peer reviewed
Rios, Joseph A.; Sireci, Stephen G. – International Journal of Testing, 2014
The International Test Commission's "Guidelines for Translating and Adapting Tests" (2010) provide important guidance on developing and evaluating tests for use across languages. These guidelines are widely applauded, but the degree to which they are followed in practice is unknown. The objective of this study was to perform a…
Descriptors: Guidelines, Translation, Adaptive Testing, Second Languages
Peer reviewed
PDF on ERIC
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
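The Ali and Chang report above introduces a suitability index (SI) for item-driven adaptive pretesting, but the snippet does not define it. Purely as a hypothetical stand-in for the general idea (routing pretest items to examinees for whom they are informative to calibrate), the sketch below assigns each examinee the pretest item whose provisional difficulty is closest to the examinee's provisional ability, capped at a target calibration sample size. The assignment rule, cap, and names here are illustrative assumptions, not the report's SI.

    import numpy as np

    def assign_pretest_item(theta_hat, pretest_b_hat, counts, max_n=300):
        """Hypothetical item-driven assignment: give the examinee the pretest item whose
        provisional difficulty is closest to the examinee's provisional ability, skipping
        items that already have enough calibration responses."""
        eligible = np.where(counts < max_n)[0]
        i = int(eligible[np.argmin(np.abs(pretest_b_hat[eligible] - theta_hat))])
        counts[i] += 1
        return i

    rng = np.random.default_rng(5)
    pretest_b_hat = rng.normal(0.0, 1.0, 20)           # provisional difficulty estimates
    counts = np.zeros(20, dtype=int)                   # calibration responses collected per item
    print(assign_pretest_item(theta_hat=0.4, pretest_b_hat=pretest_b_hat, counts=counts))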
Peer reviewed
Wang, Chun; Chang, Hua-Hua; Huebner, Alan – Journal of Educational Measurement, 2011
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
Descriptors: Test Items, Adaptive Testing, Computer Assisted Testing, Cognitive Tests
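Wang, Chang, and Huebner above build their restrictive progressive and restrictive threshold methods on the posterior weighted Kullback-Leibler (KL) information index. The sketch below computes that index in its basic form for a DINA-style item, assuming a current posterior over attribute patterns and a point estimate of the examinee's pattern; the additional stochastic components the paper introduces are not shown, and the guess/slip values and Q-matrix rows are placeholders.

    import itertools
    import numpy as np

    def dina_p_correct(q_row, alpha, guess, slip):
        """DINA probability of a correct response for attribute pattern alpha."""
        has_all = np.all(alpha[q_row == 1] == 1)
        return (1 - slip) if has_all else guess

    def pwkl_index(q_row, guess, slip, posterior, patterns, alpha_hat):
        """Posterior weighted KL index: KL divergence between the item's response
        distribution under the current estimate alpha_hat and under each candidate
        pattern, weighted by the posterior over patterns."""
        p_hat = dina_p_correct(q_row, alpha_hat, guess, slip)
        index = 0.0
        for w, alpha in zip(posterior, patterns):
            p_c = dina_p_correct(q_row, alpha, guess, slip)
            kl = (p_hat * np.log(p_hat / p_c)
                  + (1 - p_hat) * np.log((1 - p_hat) / (1 - p_c)))
            index += w * kl
        return index

    K = 3
    patterns = np.array(list(itertools.product([0, 1], repeat=K)))
    posterior = np.full(len(patterns), 1.0 / len(patterns))    # flat posterior for illustration
    alpha_hat = np.array([1, 0, 1])                            # current pattern estimate

    # Two candidate items with placeholder Q-matrix rows and guess/slip parameters.
    for q_row in (np.array([1, 0, 0]), np.array([1, 0, 1])):
        print(q_row, round(pwkl_index(q_row, 0.2, 0.1, posterior, patterns, alpha_hat), 3))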
Peer reviewed
Choi, Seung W.; Swartz, Richard J. – Applied Psychological Measurement, 2009
Item selection is a core component in computerized adaptive testing (CAT). Several studies have evaluated new and classical selection methods; however, the few that have applied such methods to the use of polytomous items have reported conflicting results. To clarify these discrepancies and further investigate selection method properties, six…
Descriptors: Adaptive Testing, Item Analysis, Comparative Analysis, Test Items
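The Choi and Swartz entry above evaluates item selection methods for polytomous items; the six methods themselves are not listed in the snippet. The sketch below shows only the classical baseline such comparisons start from: maximum Fisher information selection under a generalized partial credit model, with a randomly generated five-category item pool as a placeholder.

    import numpy as np

    def gpcm_probs(theta, a, steps):
        """Category probabilities for a generalized partial credit item;
        steps[k] is the step parameter for moving from category k to k + 1."""
        cum = np.concatenate([[0.0], np.cumsum(a * (theta - steps))])
        ex = np.exp(cum - cum.max())                   # stabilize before normalizing
        return ex / ex.sum()

    def gpcm_info(theta, a, steps):
        """Fisher information of a GPCM item: a^2 times the variance of the category score."""
        p = gpcm_probs(theta, a, steps)
        k = np.arange(len(p))
        return a**2 * ((k**2 * p).sum() - (k * p).sum()**2)

    def select_max_info(theta, pool, used):
        """Maximum-information selection over the unused polytomous items."""
        unused = [i for i in range(len(pool)) if i not in used]
        return max(unused, key=lambda i: gpcm_info(theta, pool[i][0], pool[i][1]))

    rng = np.random.default_rng(9)
    # Placeholder pool of 50 five-category items: (discrimination, four step parameters).
    pool = [(rng.lognormal(0.0, 0.3), np.sort(rng.normal(0.0, 1.0, 4))) for _ in range(50)]
    print(select_max_info(theta=0.0, pool=pool, used={0, 1}))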
Kirisci, Levent; Hsu, Tse-Chi – 1988
The predictive analysis approach to adaptive testing originated in the idea of statistical prediction analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The proposed adaptive testing model is based on a parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Item Analysis
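Kirisci and Hsu above base their adaptive testing model on a parameter-free predictive distribution in the sense of Aitchison and Dunsmore's statistical prediction analysis. The abstract does not give the model itself, so the sketch below illustrates only the generic predictive idea: update a Beta prior with the responses observed so far, integrate the success probability out, and obtain the Beta-Binomial predictive distribution of the score on further items. The exchangeable-binomial setup and the uniform prior are illustrative assumptions, not the authors' model.

    from math import comb

    def beta_binomial_predictive(successes, failures, n_future, alpha0=1.0, beta0=1.0):
        """Predictive distribution of the number of successes on n_future further items:
        update a Beta(alpha0, beta0) prior with the observed responses, then integrate
        the success probability out (Beta-Binomial predictive)."""
        a = alpha0 + successes
        b = beta0 + failures
        probs = []
        for k in range(n_future + 1):
            term = 1.0                                 # B(a + k, b + n - k) / B(a, b) as a product
            for j in range(k):
                term *= (a + j) / (a + b + j)
            for j in range(n_future - k):
                term *= (b + j) / (a + b + k + j)
            probs.append(comb(n_future, k) * term)
        return probs

    # After 7 correct and 3 incorrect responses, predict the score on 5 further items.
    print([round(p, 3) for p in beta_binomial_predictive(successes=7, failures=3, n_future=5)])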
Peer reviewed
Wang, Tianyou; Kolen, Michael J. – Journal of Educational Measurement, 2001
Reviews research literature on comparability issues in computerized adaptive testing (CAT) and synthesizes issues specific to comparability and test security. Develops a framework for evaluating comparability that contains three categories of criteria: (1) validity; (2) psychometric property/reliability; and (3) statistical assumption/test…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Criteria
Kalisch, Stanley James, Jr. – 1974
The four purposes of this study were: (1) To compare two versions of a tailored testing model similar to one suggested by Kalisch (1974); (2) To identify levels of the variables within the two versions that produce an efficient tailored testing procedure; (3) To compare, within each version, the results obtained when employing relatively small…
Descriptors: Ability, Adaptive Testing, Branching, Comparative Analysis
Kalisch, Stanley James, Jr. – 1975
Two tailored testing models, which specify procedures for predicting the correctness of examinees' responses to a fixed number of test items by presenting as few items as possible to the examinee, were compared for their efficiency. The models differ in that one requires reconsideration of each prediction whenever additional information is…
Descriptors: Ability, Adaptive Testing, Branching, Comparative Analysis
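The two Kalisch entries above study tailored testing models that predict the correctness of responses to a fixed item set while presenting as few items as possible. Their branching and prediction rules are not described in the snippets, so the following is only a generic sketch under a Rasch model: administer items one at a time, re-estimate ability, and stop once every unadministered item's predicted response probability is far enough from 0.5 to call confidently. The model, confidence threshold, and item selection rule are assumptions.

    import numpy as np

    def p_rasch(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def grid_mle(resp, b, grid=np.linspace(-4, 4, 161)):
        p = p_rasch(grid[:, None], b)
        loglik = (resp * np.log(p) + (1 - resp) * np.log(1 - p)).sum(axis=1)
        return grid[np.argmax(loglik)]

    def tailor(b, true_theta, confidence=0.85, seed=2):
        """Administer items until every remaining item can be predicted confidently."""
        rng = np.random.default_rng(seed)
        administered, responses = [], []
        remaining = list(range(len(b)))
        theta = 0.0
        while remaining:
            probs = p_rasch(theta, b[remaining])
            confident = np.maximum(probs, 1 - probs) >= confidence
            if administered and confident.all():
                break                                  # predict the rest instead of presenting them
            # Present the least predictable remaining item (probability closest to 0.5).
            i = remaining.pop(int(np.argmin(np.abs(probs - 0.5))))
            administered.append(i)
            responses.append(rng.binomial(1, p_rasch(true_theta, b[i])))
            theta = grid_mle(np.array(responses), b[administered])
        predictions = {i: int(p_rasch(theta, b[i]) >= 0.5) for i in remaining}
        return administered, predictions

    b = np.linspace(-2.5, 2.5, 25)                     # fixed 25-item set, spread in difficulty
    administered, predicted = tailor(b, true_theta=0.6)
    print(len(administered), "items presented;", len(predicted), "responses predicted")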
Smith, Nancy J.; And Others – 1993
Results of administering a computerized adaptive version of the Grammar, Spelling, and Punctuation Test (GSP) to students in the College of Communication at the University of Texas at Austin were studied. The computerized adaptive version (CAT) was administered for the first time in June 1992 to 35 prospective students who participated in the…
Descriptors: Academic Achievement, Adaptive Testing, Comparative Analysis, Computer Assisted Testing