Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 28
Descriptor
Computer Assisted Testing: 31
Correlation: 31
Regression (Statistics): 31
English (Second Language): 11
Scores: 10
Second Language Learning: 10
Comparative Analysis: 8
Foreign Countries: 8
Language Tests: 8
Models: 8
Scoring: 7
Author
Attali, Yigal: 3
Breyer, F. Jay: 2
Crossley, Scott: 2
McNamara, Danielle: 2
Sinharay, Sandip: 2
Atkins, Andrew: 1
Baldwin, Peter: 1
Barnes, Tiffany, Ed.: 1
Braude, Eric John: 1
Brehm, Laurel: 1
Brenneman, Meghan: 1
Publication Type
Reports - Research: 28
Journal Articles: 26
Tests/Questionnaires: 3
Speeches/Meeting Papers: 2
Collected Works - Proceedings: 1
Dissertations/Theses -…: 1
Reports - Evaluative: 1
Education Level
Higher Education: 13
Postsecondary Education: 12
Elementary Education: 5
Middle Schools: 4
Grade 5: 3
Grade 8: 3
Grade 4: 2
Intermediate Grades: 2
Junior High Schools: 2
Secondary Education: 2
Adult Education: 1
Location
California: 2
Florida: 2
Germany: 2
Massachusetts: 2
Australia: 1
California (Los Angeles): 1
Czech Republic: 1
Europe: 1
Hungary: 1
Israel: 1
Japan: 1
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
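To make "contributory scoring" concrete, here is a minimal sketch of one way a human rating and a machine rating can both contribute to task and section scores. The equal weights, half-point rounding, and function names are assumptions for illustration only, not the operational GRE procedure described in the report.

```python
# Minimal sketch of a contributory scoring scheme: a human rating and a machine
# rating both contribute to each task score, and task scores are averaged into
# a section score. Weights, rounding rule, and names are assumptions for
# illustration, not the operational GRE procedure.

def task_score(human: float, machine: float,
               w_human: float = 0.5, w_machine: float = 0.5) -> float:
    """Weighted combination of a human and a machine rating for one essay task."""
    return w_human * human + w_machine * machine

def section_score(task_scores: list[float]) -> float:
    """Average the task scores and round to the nearest half point (assumed scale)."""
    mean = sum(task_scores) / len(task_scores)
    return round(mean * 2) / 2

# Example: two essay tasks, each rated by one human and one automated engine.
print(section_score([task_score(4.0, 4.5), task_score(3.5, 4.0)]))  # -> 4.0
```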
Brehm, Laurel; Goldrick, Matthew – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2017
The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., "make up" the story, "cut up the meat"). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent ("cut…
Descriptors: Verbs, Form Classes (Languages), Regression (Statistics), Error Patterns
Timpe-Laughlin, Veronika; Choi, Ikkyu – Language Assessment Quarterly, 2017
Pragmatics has been a key component of language competence frameworks. While the majority of second/foreign language (L2) pragmatics tests have targeted productive skills, the assessment of receptive pragmatic skills remains a developing field. This study explores validation evidence for a test of receptive L2 pragmatic ability called the American…
Descriptors: Pragmatics, Language Tests, Test Validity, Receptive Language
Zsolnai, Anikó; Kasik, László – International Journal of School & Educational Psychology, 2017
The aim of our cross-sectional investigation was to explore prosocial behavior at the ages of 9, 11, and 13, and to reveal associations between this social behavior and some background variables such as age, gender, and parents' educational attainment. The participants were 185 Hungarian students and their teachers. Two Likert-type questionnaires…
Descriptors: Foreign Countries, Computer Assisted Testing, Child Behavior, Prosocial Behavior
Cutumisu, Maria – International Association for Development of the Information Society, 2017
This paper examines the impact of the informational value of feedback choices on students' performance, their choice to revise, and the time they spend designing posters and reading feedback in an assessment game. Choices to seek confirmatory or critical feedback and to revise posters in a poster design task were collected from a hundred and six…
Descriptors: Feedback (Response), Value Judgment, Evaluation Methods, Grade 8
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
Combinations of different item formats are common in large-scale assessments, and dimensionality analyses often indicate that such tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Harik, Polina; Baldwin, Peter; Clauser, Brian – Applied Psychological Measurement, 2013
Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…
Descriptors: Computer Assisted Testing, Automation, Scoring, Comparative Analysis
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Spivey, Michael F.; McMillan, Jeffrey J. – Journal of Education for Business, 2014
The authors examined students' effort and performance using online versus traditional classroom testing procedures. The instructor and instructional methodology were the same in different sections of an introductory finance class. Only the procedure in which students were tested--online versus in the classroom--differed. The authors measured…
Descriptors: Computer Assisted Testing, Evaluation Methods, Introductory Courses, Tests
Hoang, Giang Thi Linh; Kunnan, Antony John – Language Assessment Quarterly, 2016
Computer technology made its way into writing instruction and assessment with spelling and grammar checkers decades ago, but more recently it has done so with automated essay evaluation (AEE) and diagnostic feedback. And although many programs and tools have been developed in the last decade, not enough research has been conducted to support or…
Descriptors: Case Studies, Essays, Writing Evaluation, English (Second Language)
Hervey, Aaron S.; Greenfield, Kathryn; Gualtieri, C. Thomas – Journal of Genetic Psychology, 2012
There is overwhelming evidence of genetic influence on cognition. The effect is seen in general cognitive ability, as well as in specific cognitive domains. A conventional assessment approach using face-to-face paper and pencil testing is difficult for large-scale studies. Computerized neurocognitive testing is a suitable alternative. A total of…
Descriptors: Cognitive Ability, Testing, Parents, Preschool Children
Mueller, Shane T.; Perelman, Brandon S.; Tan, Yin Yin; Thanasuan, Kejkaew – Journal of Problem Solving, 2015
The traveling salesman problem (TSP) is a combinatorial optimization problem that requires finding the shortest path through a set of points ("cities") that returns to the starting point. Because humans provide heuristic near-optimal solutions to Euclidean versions of the problem, it has sometimes been used to investigate human visual…
Descriptors: Sales Occupations, Salesmanship, Computer System Design, Computer Software Reviews
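As a concrete companion to the TSP description above, here is a minimal sketch of a nearest-neighbor heuristic for the Euclidean version of the problem. It illustrates the kind of near-optimal strategy the abstract alludes to; the coordinates and function names are assumptions, not code from the study.

```python
# Minimal sketch of a nearest-neighbor heuristic for the Euclidean TSP:
# repeatedly visit the closest unvisited "city", then return to the start.
# Coordinates and names are assumptions for illustration; not taken from
# Mueller et al. (2015).
import math

def tour_length(tour: list[int], cities: list[tuple[float, float]]) -> float:
    """Total length of a closed tour that returns to the starting city."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor_tour(cities: list[tuple[float, float]], start: int = 0) -> list[int]:
    """Greedy near-optimal tour: always move to the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Example with five hypothetical cities.
pts = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3)]
t = nearest_neighbor_tour(pts)
print(t, round(tour_length(t, pts), 2))
```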
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Einig, Sandra – Accounting Education, 2013
This paper investigates the impact of online multiple choice questions (MCQs) on students' learning in an undergraduate Accounting module at a British university. The impact is considered from three perspectives: an analysis of how students use the MCQs; students' perceptions expressed in a questionnaire survey; and an investigation of the…
Descriptors: Electronic Learning, Blended Learning, Computer Assisted Testing, Formative Evaluation
Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills