Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Almond, Patricia; Winter, Phoebe; Cameto, Renee; Russell, Michael; Sato, Edynn; Clarke-Midura, Jody; Torres, Chloe; Haertel, Geneva; Dolan, Robert; Beddow, Peter; Lazarus, Sheryl – Journal of Technology, Learning, and Assessment, 2010
This paper represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to make tests…
Descriptors: Disabilities, Inferences, Computer Assisted Testing, Alternative Assessment
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
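For readers unfamiliar with the "generic" approach Attali, Bridgeman, and Trapani describe, the sketch below illustrates the basic idea: one fixed set of linguistic features and one fixed set of weights produce scores whose meaning does not depend on the prompt. The feature names, weights, and score scale here are hypothetical, not the actual e-rater model.

```python
# Illustrative sketch (not e-rater itself): a "generic" scorer applies one fixed
# set of feature weights to every prompt, so scores carry the same meaning
# across prompts. Feature names and weights are hypothetical.

FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.9,   # fewer errors -> higher score
    "avg_sentence_length_z": 0.4,           # standardized fluency proxy
    "vocabulary_sophistication_z": 0.6,
    "organization_z": 0.8,
}
INTERCEPT = 3.0  # centers scores on a hypothetical 1-6 rubric


def generic_essay_score(features: dict[str, float]) -> float:
    """Combine standardized features with prompt-independent weights."""
    raw = INTERCEPT + sum(
        FEATURE_WEIGHTS[name] * value for name, value in features.items()
    )
    return max(1.0, min(6.0, raw))  # clamp to the rubric's score range


if __name__ == "__main__":
    essay = {
        "grammar_errors_per_100_words": 1.2,
        "avg_sentence_length_z": 0.5,
        "vocabulary_sophistication_z": 0.3,
        "organization_z": 0.7,
    }
    print(f"Predicted score: {generic_essay_score(essay):.2f}")
```

Because the weights never change across prompts, a given feature profile always maps to the same score, which is what makes cross-prompt score comparisons meaningful.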
Bechard, Sue; Sheinker, Jan; Abell, Rosemary; Barton, Karen; Burling, Kelly; Camacho, Christopher; Cameto, Renee; Haertel, Geneva; Hansen, Eric; Johnstone, Chris; Kingston, Neal; Murray, Elizabeth; Parker, Caroline E.; Redfield, Doris; Tucker, Bill – Journal of Technology, Learning, and Assessment, 2010
This article represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to better understand…
Descriptors: Test Validity, Disabilities, Educational Change, Evaluation Methods
Georgiadou, Elissavet; Triantafillou, Evangelos; Economides, Anastasios A. – Journal of Technology, Learning, and Assessment, 2007
Since researchers acknowledged the several advantages of computerized adaptive testing (CAT) over traditional linear test administration, the issue of item exposure control has received increased attention. Due to CAT's underlying philosophy, particular items in the item pool may be presented too often and become overexposed, while other items are…
Descriptors: Adaptive Testing, Computer Assisted Testing, Scoring, Test Items
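As a rough illustration of the exposure-control problem Georgiadou, Triantafillou, and Economides survey, the sketch below implements one well-known remedy, a "randomesque" selection rule with an exposure cap: rather than always administering the single most informative item, the algorithm picks at random among the k best eligible items. The item pool, parameters, and cap are hypothetical, and operational CAT programs typically use more elaborate methods such as Sympson-Hetter control.

```python
# Minimal sketch of randomesque item selection with an exposure cap.
# Item parameters and the exposure limit below are made up for illustration.
import math
import random


def item_information(a: float, b: float, theta: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)


def select_item(pool, theta, exposure_counts, max_exposure, k=5):
    """Choose at random among the k most informative items under the exposure cap."""
    eligible = [i for i in pool if exposure_counts[i["id"]] < max_exposure]
    ranked = sorted(eligible,
                    key=lambda i: item_information(i["a"], i["b"], theta),
                    reverse=True)
    chosen = random.choice(ranked[:k])
    exposure_counts[chosen["id"]] += 1
    return chosen


if __name__ == "__main__":
    pool = [{"id": n, "a": random.uniform(0.8, 2.0), "b": random.uniform(-2, 2)}
            for n in range(50)]
    counts = {item["id"]: 0 for item in pool}
    for _ in range(200):  # simulate 200 examinees, all at theta = 0
        select_item(pool, theta=0.0, exposure_counts=counts, max_exposure=60)
    print("Most-used item exposures:", sorted(counts.values(), reverse=True)[:5])
```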
Kim, Do-Hong; Huynh, Huynh – Journal of Technology, Learning, and Assessment, 2007
This study examined comparability of student scores obtained from computerized and paper-and-pencil formats of the large-scale statewide end-of-course (EOC) examinations in the two subject areas of Algebra and Biology. Evidence in support of comparability of computerized and paper-based tests was sought by examining scale scores, item parameter…
Descriptors: Computer Assisted Testing, Measures (Individuals), Biology, Algebra
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
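For context on the test-level comparability index Puhan, Boughton, and Kim report, the short example below (with made-up summary statistics) computes Cohen's d, the standardized difference between the paper-and-pencil and computer-based group means using a pooled standard deviation.

```python
# Worked example of Cohen's d between PPT and CBT score distributions.
# The summary statistics below are hypothetical, not the study's data.
import math


def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd


if __name__ == "__main__":
    d = cohens_d(mean1=172.4, sd1=12.1, n1=850,   # paper-and-pencil group
                 mean2=171.8, sd2=12.6, n2=790)   # computer-based group
    print(f"Cohen's d = {d:.3f}  (|d| below 0.2 is conventionally a negligible effect)")
```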
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric™, and human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric™ and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Penuel, William R.; Yarnall, Louise – Journal of Technology, Learning, and Assessment, 2005
Since 2002, Project WHIRL (Wireless Handhelds In Reflection on Learning) has investigated potential uses of handheld computers in K-12 science classrooms using a teacher-involved process of software development and field trials. The project is a three-year research and development grant from the National Science Foundation, and it is a partnership…
Descriptors: Research and Development, Elementary Secondary Education, Program Effectiveness, Teaching Methods