Peer reviewed
ERIC Number: EJ1475767
Record Type: Journal
Publication Date: 2025-Aug
Pages: 46
Abstractor: As Provided
ISBN: N/A
ISSN: 0049-1241
EISSN: 1552-8294
Available Date: N/A
Balancing Large Language Model Alignment and Algorithmic Fidelity in Social Science Research
Sociological Methods & Research, v54 n3 p1110-1155 2025
Generative artificial intelligence (AI) has the potential to revolutionize social science research. However, researchers face the difficult challenge of choosing a specific AI model, often without social-science-specific guidance. To demonstrate the importance of this choice, we present an evaluation of the effect of alignment, or human-driven modification, on the ability of large language models (LLMs) to simulate the attitudes of human populations (sometimes called "silicon sampling"). We benchmark aligned and unaligned versions of six open-source LLMs against each other and compare their outputs to comparable responses from humans. Our results suggest that model alignment affects output in predictable ways, with implications for prompting, task completion, and the substantive content of LLM-based results. We conclude that researchers must be aware of the complex ways in which model training affects their research and carefully consider model choice for each project. We discuss future steps to improve how social scientists work with generative AI tools.
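To make the "silicon sampling" idea concrete, here is a minimal Python sketch assuming the Hugging Face transformers library; the model name, persona, and survey item are illustrative placeholders, not the paper's actual materials or code. The approach is to prompt an LLM with a respondent persona, sample several closed-form answers, and compare the simulated answer distribution with human survey marginals.

# Hedged sketch of silicon sampling; model, persona, and item are assumptions.
from transformers import pipeline

# Hypothetical model choice: the paper benchmarks six open-source LLMs in
# aligned (instruction-tuned) and unaligned (base) variants, not this one.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-hf")

PERSONA = ("I am a 45-year-old college-educated woman from Ohio "
           "who votes regularly.")
ITEM = ("Question: Do you favor or oppose increasing federal spending on "
        "public schools?\nAnswer (Favor/Oppose):")

def silicon_sample(persona: str, item: str, n: int = 20) -> list[str]:
    """Draw n sampled responses to one survey item for one persona."""
    prompt = f"{persona}\n{item}"
    outputs = generator(prompt, max_new_tokens=4, do_sample=True,
                        num_return_sequences=n, return_full_text=False)
    # Keep only the first generated word, e.g. "Favor" or "Oppose".
    return [o["generated_text"].strip().split()[0]
            for o in outputs if o["generated_text"].strip()]

responses = silicon_sample(PERSONA, ITEM)
print(responses)  # e.g. ['Favor', 'Oppose', ...]

One would tally the returned labels across many personas and compare the resulting distribution with the corresponding human benchmark; differences between aligned and unaligned model variants on the same prompts are the kind of effect the paper evaluates.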
SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: (1) Computer Science, Brigham Young University, Provo, UT, USA; (2) Political Science, Brigham Young University, Provo, UT, USA