Showing 1 to 15 of 18 results
Peer reviewed
PDF on ERIC
Pásztor, Attila; Magyar, Andrea; Pásztor-Kovács, Anita; Rausch, Attila – Journal of Intelligence, 2022
The aims of the study were (1) to develop a domain-general computer-based assessment tool for inductive reasoning and to empirically test the theoretical models of Klauer and of Christou and Papageorgiou; and (2) to develop an online game to foster inductive reasoning through mathematical content and to investigate its effectiveness. The sample was…
Descriptors: Game Based Learning, Logical Thinking, Computer Assisted Testing, Models
Hui, Bronson – ProQuest LLC, 2021
Vocabulary researchers have started expanding their assessment toolbox by incorporating timed tasks and psycholinguistic instruments (e.g., priming tasks) to gain insights into lexical development (e.g., Elgort, 2011; Godfroid, 2020b; Nakata & Elgort, 2020; Vandenberghe et al., 2021). These time-sensitive and implicit word measures differ…
Descriptors: Measures (Individuals), Construct Validity, Decision Making, Vocabulary Development
Peer reviewed
Direct link
Degiorgio, Lisa – Measurement and Evaluation in Counseling and Development, 2015
Equivalency of test versions is often assumed by counselors and evaluators. This study examined two versions, paper-and-pencil and computer-based, of the Driver Risk Inventory, a DUI/DWI (driving under the influence/driving while intoxicated) risk assessment. An overview of computer-based testing and standards for equivalency is also provided. Results…
Descriptors: Risk Assessment, Drinking, Computer Assisted Testing, Measures (Individuals)
Peer reviewed
Direct link
Wei, Wei; Zheng, Ying – Computer Assisted Language Learning, 2017
This research provided a comprehensive evaluation and validation of the listening section of a newly introduced computerised test, Pearson Test of English Academic (PTE Academic). PTE Academic contains 11 item types assessing academic listening skills either alone or in combination with other skills. First, task analysis helped identify skills…
Descriptors: Listening Comprehension Tests, Computer Assisted Testing, Language Tests, Construct Validity
Peer reviewed
PDF on ERIC
Ercan, Recep; Yaman, Tugba; Demir, Selcuk Besir – Journal of Education and Training Studies, 2015
The objective of this study is to develop a valid and reliable attitude scale with sound psychometric properties that can measure secondary school students' attitudes towards human rights. The study group comprises 710 6th-, 7th-, and 8th-grade students attending 4 secondary schools in the centre of Sivas. The study group…
Descriptors: Civil Rights, Attitude Measures, Factor Analysis, Construct Validity
Peer reviewed
PDF on ERIC
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Wang, Shudong; McCall, Marty; Jiao, Hong; Harris, Gregg – Online Submission, 2012
The purposes of this study are twofold. First, to investigate the construct or factorial structure of a set of Reading and Mathematics computerized adaptive tests (CAT), "Measures of Academic Progress" (MAP), given in different states at different grades and academic terms. The second purpose is to investigate the invariance of test…
Descriptors: Construct Validity, Factor Structure, Adaptive Testing, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Direct link
Sawaki, Yasuyo; Stricker, Lawrence J.; Oranje, Andreas H. – Language Testing, 2009
This construct validation study investigated the factor structure of the Test of English as a Foreign Language[TM] Internet-based test (TOEFL[R] iBT). An item-level confirmatory factor analysis was conducted for a test form completed by participants in a field study. A higher-order factor model was identified, with a higher-order general factor…
Descriptors: Speech Communication, Construct Validity, Factor Structure, Factor Analysis
Peer reviewed
Direct link
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing
Peer reviewed
Direct link
Lohse, Barbara; Satter, Ellyn; Horacek, Tanya; Gebreselassie, Tesfayi; Oakland, Mary Jane – Journal of Nutrition Education and Behavior, 2007
Objective: Assess validity of the ecSatter Inventory (ecSI) to measure eating competence (EC). Design: Concurrent administration of ecSI with validated measures of eating behaviors using on-line and paper-pencil formats. Setting: The on-line survey was completed by 370 participants; 462 completed the paper version. Participants: Participants…
Descriptors: Eating Disorders, Content Validity, Construct Validity, Test Validity
Itomitsu, Masayuki – ProQuest LLC, 2009
This dissertation reports development and validation studies of a Web-based standardized test of Japanese as a foreign language (JFL), designed to measure learners' off-line grammatical and pragmatic knowledge in multiple-choice format. Targeting Japanese majors at U.S. universities and colleges, the test is designed to explore possible…
Descriptors: Sentences, Speech Acts, Grammar, Second Language Learning
Peer reviewed
PDF on ERIC
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Peer reviewed
Endler, Norman S.; Parker, James D. A. – Educational and Psychological Measurement, 1990
C. Davis and M. Cowles (1989) analyzed a total trait anxiety score on the Endler Multidimensional Anxiety Scales (EMAS)--a unidimensional construct that this multidimensional measure does not assess. Data are reanalyzed using the appropriate scoring procedure for the EMAS. Subjects included 145 undergraduates in 1 of 4 testing conditions. (SLD)
Descriptors: Anxiety, Comparative Testing, Computer Assisted Testing, Construct Validity