Showing 286 to 300 of 1,052 results
Peer reviewed
Wang, Chien-hwa; Chen, Cheng-ping – Electronic Journal of e-Learning, 2013
Adaptive testing studies have concentrated mainly on the effectiveness and efficiency of the systems built for the research experiments. Such general information has been criticised as falling short of providing qualitative descriptions of learning performance. Takahiro Sato of Japan proposed an analytical diagram called…
Descriptors: Foreign Countries, Adaptive Testing, Computer Assisted Testing, Feedback (Response)
Northwest Evaluation Association, 2013
While many educators expect the Common Core State Standards (CCSS) to be more rigorous than previous state standards, some wonder whether the transition to CCSS and to a Common Core-aligned MAP test will have an impact on their students' RIT scores or the NWEA norms. MAP assessments use a proprietary scale known as the RIT (Rasch unit) scale to measure…
Descriptors: Achievement Tests, Computer Assisted Testing, Adaptive Testing, Item Response Theory
Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: Standardized Tests, Career Readiness, College Readiness, Test Validity
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua – ACT, Inc., 2012
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Descriptors: Adaptive Testing, Heuristics, Accuracy, Item Banks
Peer reviewed
Han, Kyung T. – Journal of Educational Measurement, 2012
Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
He, Qingping – Educational Research, 2012
Background: Although on-demand testing is being increasingly used in many areas of assessment, it has not been adopted in high stakes examinations like the General Certificate of Secondary Education (GCSE) and General Certificate of Education Advanced level (GCE A level) offered by awarding organisations (AOs) in the UK. One of the major issues…
Descriptors: Foreign Countries, Secondary Education, High Stakes Tests, Time Perspective
Morgan, Deanna L.; Buckendahl, Chad W. – College Board, 2011
[Slides] presented at the annual conference of the National Council on Measurement in Education, 2011, New Orleans.
Descriptors: Cutting Scores, Computer Assisted Testing, Adaptive Testing, Common Core State Standards
Peer reviewed
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Medhanie, Amanuel G.; Dupuis, Danielle N.; LeBeau, Brandon; Harwell, Michael R.; Post, Thomas R. – Educational and Psychological Measurement, 2012
The first college mathematics course a student enrolls in is often affected by performance on a college mathematics placement test. Yet validity evidence for mathematics placement tests remains limited, even for nationally standardized placement tests, and when it is available it usually consists of examining a student's subsequent performance in…
Descriptors: College Mathematics, Student Placement, Mathematics Tests, Test Validity
Peer reviewed
Finkelman, Matthew D.; Smits, Niels; Kim, Wonsuk; Riley, Barth – Applied Psychological Measurement, 2012
The Center for Epidemiologic Studies-Depression (CES-D) scale is a well-known self-report instrument that is used to measure depressive symptomatology. Respondents who take the full-length version of the CES-D are administered a total of 20 items. This article investigates the use of curtailment and stochastic curtailment (SC), two sequential…
Descriptors: Measures (Individuals), Depression (Psychology), Test Length, Computer Assisted Testing
Peer reviewed
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized adaptive testing, examinees take different sets of items and test "forms"…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
Peer reviewed
Chen, Shu-Ying – Applied Psychological Measurement, 2010
To date, exposure control procedures that are designed to control test overlap in computerized adaptive tests (CATs) are based on the assumption of item sharing between pairs of examinees. However, in practice, examinees may obtain test information from more than one previous test taker. This larger scope of information sharing needs to be…
Descriptors: Computer Assisted Testing, Adaptive Testing, Methods, Test Items
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Wang, Shudong; Jiao, Hong; He, Wei – Online Submission, 2011
The ability estimation procedure is one of the most important components of a computerized adaptive testing (CAT) system. Currently, all CATs that provide K-12 student scores are based on item response theory (IRT) models; however, such application directly violates the assumption in IRT models of an independent sample of persons, because ability…
Descriptors: Accuracy, Computation, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Wang, Chun; Chang, Hua-Hua; Huebner, Alan – Journal of Educational Measurement, 2011
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
Descriptors: Test Items, Adaptive Testing, Computer Assisted Testing, Cognitive Tests