Lin, Chuan-Ju – Journal of Technology, Learning, and Assessment, 2010
Assembling equivalent test forms with minimal test overlap across forms is important in ensuring test security. Chen and Lei (2009) suggested an exposure control technique, test overlap control--ordered item pooling, that operates on the fly, based on the observation that the test overlap rate under ordered item pooling for the first t examinees is a function of test overlap…
Descriptors: Test Length, Test Format, Evaluation Criteria, Psychometrics
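The test overlap rate discussed in the abstract above can be illustrated with a minimal sketch: the average proportion of items shared across all pairs of administered forms. The item IDs and forms below are invented for illustration, not taken from the study.

```python
# Minimal sketch of a pairwise test-overlap rate: the mean fraction of
# shared items over all pairs of equal-length forms.
from itertools import combinations

def overlap_rate(forms):
    """Average proportion of items shared between each pair of forms."""
    pairs = list(combinations(forms, 2))
    if not pairs:
        return 0.0
    n = len(forms[0])  # test length, assumed equal across forms
    return sum(len(set(a) & set(b)) for a, b in pairs) / (n * len(pairs))

# Three hypothetical 4-item forms drawn from a shared pool.
forms = [
    ["i1", "i2", "i3", "i4"],
    ["i1", "i2", "i5", "i6"],
    ["i7", "i8", "i9", "i1"],
]
print(overlap_rate(forms))  # shared items: 2 + 1 + 1 over 3 pairs of 4 items
```

Lower values mean examinees see fewer items in common, which is the security property the exposure-control technique aims to bound.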
Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Gierl, Mark J.; Zhou, Jiawen; Alves, Cecila – Journal of Technology, Learning, and Assessment, 2008
An item model serves as an explicit representation of the variables in an assessment task. An item model includes the "stem", "options", and "auxiliary information". The "stem" is the part of an item which formulates context, content, and/or the question the examinee is required to answer. The "options" contain the alternative answers with one…
Descriptors: Classification, Test Items, Models, Test Construction
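The stem/options/auxiliary-information structure described in the abstract above can be sketched as a small data type. The class name, fields, and sample item below are hypothetical, chosen only to mirror the three parts the abstract names.

```python
# Hedged sketch of an item model: a stem, a set of options (the key plus
# distractors), and optional auxiliary information such as a passage or
# figure caption. All names and the sample item are illustrative.
from dataclasses import dataclass

@dataclass
class ItemModel:
    stem: str                        # context, content, and the question posed
    options: list                    # alternative answers: one key, rest distractors
    key_index: int                   # index of the correct option
    auxiliary_information: str = ""  # e.g., a passage, table, or figure

    def render(self) -> str:
        """Lay the item out as it might appear to an examinee."""
        lines = [f"{chr(65 + i)}. {opt}" for i, opt in enumerate(self.options)]
        parts = ([self.auxiliary_information] if self.auxiliary_information else [])
        return "\n".join(parts + [self.stem] + lines)

item = ItemModel(
    stem="If a train travels 120 km in 2 hours, what is its average speed?",
    options=["40 km/h", "60 km/h", "80 km/h"],
    key_index=1,
)
print(item.render())
```

In automatic item generation, the variables embedded in such a model (numbers, names, distractor rules) are systematically varied to produce item instances, which is why making the three parts explicit matters.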
Lin, Chuan-Ju – Journal of Technology, Learning, and Assessment, 2008
The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…
Descriptors: Item Response Theory, Test Theory, Comparative Analysis, Computer Assisted Testing
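The weighted-deviations idea behind the WDM heuristic mentioned above can be sketched greedily: at each step, add the item whose inclusion yields the smallest weighted deviation from the content-constraint targets. The pool, targets, and weights below are invented for illustration; a production WDM implementation handles many more constraint types.

```python
# Hedged sketch of WDM-style greedy assembly. Each content area has a
# target count and a weight; deviations from targets are penalized and
# the item minimizing the penalty is selected at each step.
def weighted_deviation(counts, targets, weights):
    """Sum of weighted absolute deviations from the content targets."""
    return sum(weights[c] * abs(counts.get(c, 0) - t) for c, t in targets.items())

def assemble(pool, length, targets, weights):
    """Greedily build a form of the given length from the item pool."""
    form, counts, remaining = [], {}, list(pool)
    for _ in range(length):
        def cost(item):
            trial = dict(counts)
            trial[item["content"]] = trial.get(item["content"], 0) + 1
            return weighted_deviation(trial, targets, weights)
        best = min(remaining, key=cost)     # item with smallest resulting penalty
        remaining.remove(best)
        form.append(best["id"])
        counts[best["content"]] = counts.get(best["content"], 0) + 1
    return form, counts

pool = [
    {"id": "i1", "content": "algebra"},
    {"id": "i2", "content": "algebra"},
    {"id": "i3", "content": "geometry"},
    {"id": "i4", "content": "geometry"},
    {"id": "i5", "content": "algebra"},
]
targets = {"algebra": 2, "geometry": 2}
weights = {"algebra": 1.0, "geometry": 1.0}
form, counts = assemble(pool, 4, targets, weights)
print(counts)  # content blueprint is met exactly
```

Because constraints enter the objective as weighted penalties rather than hard requirements, the heuristic still returns a form when the blueprint cannot be satisfied exactly, which is what makes it attractive for assembling many alternate forms automatically.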
Pommerich, Mary – Journal of Technology, Learning, and Assessment, 2004
As testing moves from paper-and-pencil administration toward computerized administration, how to present tests on a computer screen becomes an important concern. Of particular concern are tests that contain necessary information that cannot be displayed on screen all at once for an item. Ideally, the method of presentation should not interfere…
Descriptors: Test Content, Computer Assisted Testing, Multiple Choice Tests, Computer Interfaces
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification