Showing all 15 results
Peer reviewed
Aybek, Eren Can – Journal of Applied Testing Technology, 2021
The study aims to introduce catIRT tools, which facilitates researchers' Item Response Theory (IRT) and Computerized Adaptive Testing (CAT) simulations. catIRT tools provides an interface to the mirt and catR packages through the shiny package in R. Through this interface, researchers can run IRT calibration and CAT simulations even if they do not…
Descriptors: Item Response Theory, Computer Assisted Testing, Simulation, Models
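As a rough illustration of the CAT item-selection step this kind of simulation involves (a minimal sketch, not the catIRT tools or catR implementation; the item parameters below are invented):

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    """Pick the index of the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *bank[i]))

# Invented (a, b) item parameters; a real CAT also re-estimates theta
# after every response and applies exposure control.
bank = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.5), (1.5, 0.2)]
first = select_next_item(0.0, bank, set())  # most informative item at theta = 0
```

Maximum-information selection is only one of the criteria such packages offer, but it is the canonical starting point.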
Peer reviewed
Becker, Kirk A.; Kao, Shu-chuan – Journal of Applied Testing Technology, 2022
Natural Language Processing (NLP) offers methods for understanding and quantifying the similarity between written documents. Within the testing industry these methods have been used for automatic item generation, automated scoring of text and speech, modeling item characteristics, automatic question answering, machine translation, and automated…
Descriptors: Item Banks, Natural Language Processing, Computer Assisted Testing, Scoring
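The document-similarity idea underlying such NLP methods can be sketched with a bag-of-words cosine similarity (a minimal sketch, not the authors' method; the item stems are invented, and production systems would use TF-IDF weighting or embeddings):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented item stems: the first two share wording, the third does not.
item1 = "solve the linear equation for x"
item2 = "solve the quadratic equation for y"
item3 = "name the capital of France"
```

In an item-bank setting, high similarity between two items can flag potential enemy items or near-duplicates.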
Peer reviewed
Mead, Alan D.; Zhou, Chenxuan – Journal of Applied Testing Technology, 2022
This study fit a Naïve Bayesian classifier to the words of exam items to predict the Bloom's taxonomy level of the items. We addressed five research questions, showing that reasonably good prediction of Bloom's level was possible, but accuracy varies across levels. In our study, performance for Level 2 was poor (Level 2 items were misclassified…
Descriptors: Artificial Intelligence, Prediction, Taxonomy, Natural Language Processing
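A multinomial Naïve Bayes classifier over item words can be sketched as follows (a toy sketch with invented training items and two Bloom levels, not the authors' model or data):

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Fit multinomial Naive Bayes: examples is a list of (words, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for words, label in examples:
        label_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict_nb(model, words):
    """Return the label with the highest Laplace-smoothed log-posterior."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)  # smoothed likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented items: verbs like "define"/"list" cue Remember; "apply"/"compute" cue Apply.
train = [
    (["define", "the", "term"], "remember"),
    (["list", "the", "steps"], "remember"),
    (["apply", "the", "formula"], "apply"),
    (["compute", "the", "result"], "apply"),
]
model = train_nb(train)
```

The classifier leans heavily on the stem's verb, which is consistent with the intuition that Bloom levels are signaled by task verbs.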
Peer reviewed
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
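Rapid-guessing rates of the kind examined here are typically operationalized with a response-time threshold (a minimal sketch; the threshold and response times below are invented, and real studies set thresholds per item):

```python
def rapid_guess_rate(response_times, threshold=5.0):
    """Fraction of responses faster than a time threshold in seconds,
    a common proxy for rapid-guessing (disengaged) behavior."""
    if not response_times:
        return 0.0
    rapid = sum(1 for t in response_times if t < threshold)
    return rapid / len(response_times)

mc_times = [3.0, 12.0, 4.5, 20.0]    # two responses under 5 s
tei_times = [15.0, 22.0, 9.0, 30.0]  # none under 5 s
```

Comparing these rates across item types is the core of the engagement comparison described above.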
Peer reviewed
Wolkowitz, Amanda A.; Foley, Brett P.; Zurn, Jared – Journal of Applied Testing Technology, 2021
As assessments move from traditional paper-pencil administration to computer-based administration, many testing programs are incorporating alternative item types (AITs) into assessments with the goals of measuring higher-order thinking, offering insight into problem-solving, and representing authentic real-world tasks. This paper explores multiple…
Descriptors: Psychometrics, Alternative Assessment, Computer Assisted Testing, Test Items
Peer reviewed
Laughlin Davis, Laurie; Morrison, Kristin; Zhou-Yile Schnieders, Joyce; Marsh, Benjamin – Journal of Applied Testing Technology, 2021
With the shift to next generation digital assessments, increased attention has focused on Technology-Enhanced Assessments and Items (TEIs). This study evaluated the feasibility of a high-fidelity digital assessment item response format, which allows students to solve mathematics questions on a tablet using a digital pen. This digital ink approach…
Descriptors: Computer Assisted Testing, Mathematics Instruction, Technology Uses in Education, Mathematics Tests
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Jorion, Natalie; Dickison, Philip – Journal of Applied Testing Technology, 2019
Clinical judgment has become an increasingly important aspect of modern health service professions. To ensure public safety, licensure exams must go beyond assessing only knowledge and skills when evaluating entry-level professionals to evaluating clinical judgment. This importance necessitates licensure and certification examinations in these…
Descriptors: Decision Making, Licensing Examinations (Professions), Certification, Nursing Education
Peer reviewed
Wolkowitz, Amanda A.; Davis-Becker, Susan L.; Gerrow, Jack D. – Journal of Applied Testing Technology, 2016
The purpose of this study was to investigate the impact of a cheating prevention strategy employed for a professional credentialing exam that involved releasing over 7,000 active and retired exam items. This study evaluated: 1) If any significant differences existed between examinee performance on released versus non-released items; 2) If item…
Descriptors: Cheating, Test Content, Test Items, Foreign Countries
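The first research question, comparing examinee performance on released versus non-released items, amounts to a standardized mean-difference comparison of classical item difficulties (a sketch with invented p-values, not the study's data or exact analysis):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups using a pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Invented classical p-values (proportion correct) for the two item pools.
released = [0.82, 0.75, 0.90, 0.78, 0.85]
non_released = [0.80, 0.74, 0.88, 0.77, 0.83]
d = cohens_d(released, non_released)
```

A small effect size here would suggest that releasing the items did not make them meaningfully easier for examinees.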
Peer reviewed
Gierl, Mark J.; Lai, Hollis; Hogan, James B.; Matovinovic, Donna – Journal of Applied Testing Technology, 2015
The demand for test items far outstrips the current supply. This increased demand can be attributed, in part, to the transition to computerized testing, but it is also linked to dramatic changes in how 21st century educational assessments are designed and administered. One way to address this growing demand is with automatic item generation.…
Descriptors: Common Core State Standards, Test Items, Alignment (Education), Test Construction
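The core of template-based automatic item generation is instantiating an item model over allowed slot values (a minimal sketch with an invented item model; real item models also constrain distractors and cognitive features):

```python
import itertools

def generate_items(template, slots):
    """Instantiate an item model by filling each slot with every allowed value."""
    keys = list(slots)
    items = []
    for combo in itertools.product(*(slots[k] for k in keys)):
        values = dict(zip(keys, combo))
        items.append(template.format(**values))
    return items

# Invented item model for an arithmetic stem.
template = "A store sells {n} pencils for ${price}. What is the cost of one pencil?"
items = generate_items(template, {"n": [4, 5, 10], "price": [2, 3]})
```

Even this toy model yields one stem per slot combination, which is how item generation multiplies a single item model into many operational items.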
Peer reviewed
Burke, Matthew; Devore, Richard; Stopek, Josh – Journal of Applied Testing Technology, 2013
This paper describes efforts to bring principled assessment design to a large-scale, high-stakes licensure examination by employing the frameworks of Assessment Engineering (AE), the Revised Bloom's Taxonomy (RBT), and Cognitive Task Analysis (CTA). The Uniform CPA Examination is practice-oriented and focuses on the skills of accounting. In…
Descriptors: Licensing Examinations (Professions), Accounting, Engineering, Test Construction
Peer reviewed
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized-adaptive testing, examinees take different sets of items and test "forms"…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
Peer reviewed
Makransky, Guido; Glas, Cees A. W. – Journal of Applied Testing Technology, 2010
An accurately calibrated item bank is essential for a valid computerized adaptive test. However, in some settings, such as occupational testing, there is limited access to test takers for calibration. As a result, collecting data to accurately calibrate an item bank in an occupational setting is…
Descriptors: Foreign Countries, Simulation, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Kingsbury, G. Gage; Wise, Steven L. – Journal of Applied Testing Technology, 2011
Development of adaptive tests used in K-12 settings requires the creation of stable measurement scales to measure the growth of individual students from one grade to the next, and to measure change in groups from one year to the next. Accountability systems like No Child Left Behind require stable measurement scales so that accountability has…
Descriptors: Elementary Secondary Education, Adaptive Testing, Academic Achievement, Measures (Individuals)
Peer reviewed
Laitusis, Cara Cahalan; Maneckshana, Behroz; Monfils, Lora; Ahlgrim-Delzell, Lynn – Journal of Applied Testing Technology, 2009
The purpose of this study was to examine Differential Item Functioning (DIF) by disability groups on an on-demand performance assessment for students with severe cognitive impairments. Researchers examined the presence of DIF for two comparisons. One comparison involved students with severe cognitive impairments who served as the reference group…
Descriptors: Test Bias, Test Items, Autism, Performance Based Assessment
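A standard way to quantify DIF between a reference and a focal group is the Mantel-Haenszel common odds ratio across score strata (a minimal sketch; the counts below are invented and this is not necessarily the procedure these authors used):

```python
import math

def mantel_haenszel_odds_ratio(strata):
    """strata: list of (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
    2x2 tables, one per ability stratum. Values far from 1 suggest DIF."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented counts in three ability strata (reference group vs. focal group).
strata = [(40, 10, 20, 10), (30, 20, 15, 15), (20, 30, 8, 22)]
alpha = mantel_haenszel_odds_ratio(strata)
mh_d_dif = -2.35 * math.log(alpha)  # ETS delta metric: negative favors reference
```

Stratifying on total score is what separates genuine DIF from overall ability differences between the groups.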
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics