# Awesome-LLM-in-Social-Science



> **🔗 Recommended Resource:**  
> Check out [Awesome-LLM-Psychometrics](https://github.com/ValueByte-AI/Awesome-LLM-Psychometrics) for a comprehensive collection of papers and resources on LLM psychometrics, including evaluation, validation, and enhancement.




Below we compile *awesome* papers that  
- **evaluate** Large Language Models (LLMs) from a perspective of Social Science.
- **align** LLMs from a perspective of Social Science.
- employ LLMs to **facilitate research, address issues, and enhance tools** in Social Science.
- contribute **surveys**, **perspectives**, and **datasets** on the above topics.

The above taxonomies are by no means orthogonal; for example, evaluation often relies on simulation. We categorize each paper according to our understanding of its primary focus. This collection has **a special focus on Psychology and intrinsic values**.

Welcome to contribute and discuss!

---

🤩 Papers marked with a ⭐️ are contributed by the maintainers of this repository. If you find them useful, we would greatly appreciate it if you could give the repository a star and cite our paper.


```bibtex
@article{ye2025large,
  title={Large Language Model Psychometrics: A Systematic Review of Evaluation, Validation, and Enhancement},
  author={Ye, Haoran and Jin, Jing and Xie, Yuhang and Zhang, Xin and Song, Guojie},
  journal={arXiv preprint arXiv:2505.08245},
  year={2025},
  note={Project website: \url{https://llm-psychometrics.com}, GitHub: \url{https://github.com/ValueByte-AI/Awesome-LLM-Psychometrics}}
}
```


---

## Table of Contents

* 1. [📚 Survey](#Survey)
* 2. [🗂️ Dataset](#Dataset)
* 3. [🔎 Evaluating LLM](#EvaluatingLLM)
	* 3.1. [❤️ Value](#Value)
	* 3.2. [🩷 Personality](#Personality)
	* 3.3. [🔞 Morality](#Morality)
	* 3.4. [🎤 Opinion](#Opinion)
	* 3.5. [💚 General Preference](#GeneralPreference)
	* 3.6. [🧠 Ability](#Ability)
	* 3.7. [⚠️ Risk](#Risk) 
* 4. [⚒️ Tool enhancement](#Toolenhancement)
* 5. [⛑️ Alignment](#Alignment)
	* 5.1. [🌈 Pluralistic Alignment](#PluralisticAlignment)
* 6. [🚀 Simulation](#Simulation)
* 7. [👁️‍🗨️ Perspective and Position](#Perspective)


---

##  1. <a name='Survey'></a>📚 Survey 
- ⭐️ **Large Language Model Psychometrics: A Systematic Review of Evaluation, Validation, and Enhancement**, 2025.05, [[paper]](https://arxiv.org/abs/2505.08245).
- **Missing the Margins: A Systematic Literature Review on the Demographic Representativeness of LLMs**, ACL 2025, [[paper]](https://aclanthology.org/2025.findings-acl.1246/), [[collection]](https://github.com/Indiiigo/LLM_rep_review).
- **Large language models (LLM) in computational social science: prospects, current state, and challenges**, 2025.03, Social Network Analysis and Mining, [[paper]](https://link.springer.com/article/10.1007/s13278-025-01428-9).
- **Towards Scientific Intelligence: A Survey of LLM-based Scientific Agents**, 2025.03, [[paper]](https://arxiv.org/abs/2503.24047).
- **On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective**, 2025.02, [[paper]](https://arxiv.org/abs/2502.14296).
- **The Road to Artificial SuperIntelligence: A Comprehensive Survey of Superalignment**, 2024.12, [[paper]](https://arxiv.org/abs/2412.16468).
- **Large Language Model Safety: A Holistic Survey**, 2024.12, [[paper]](https://arxiv.org/abs/2412.17686).
- **Political-LLM: Large Language Models in Political Science**, 2024.12, [[paper]](https://arxiv.org/abs/2412.06864), [[website]](https://political-llm.org/).
- **LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods**, 2024.12, [[paper]](https://arxiv.org/abs/2412.05579).
- **From Individual to Society: A Survey on Social Simulation Driven by Large Language Model-based Agents**, 2024.12, [[paper]](https://arxiv.org/abs/2412.03563), [[repo]](https://github.com/FudanDISC/SocialAgent).
- **A Survey on Human-Centric LLMs**, 2024.11, [[paper]](https://arxiv.org/abs/2411.14491).
- **Survey of Cultural Awareness in Language Models: Text and Beyond**, 2024.11, [[paper]](https://arxiv.org/pdf/2411.00860).
- **How developments in natural language processing help us in understanding human behaviour**, 2024.10, Nature Human Behaviour, [[paper]](https://www.nature.com/articles/s41562-024-01938-0.pdf).
- **How large language models can reshape collective intelligence**, 2024.09, Nature Human Behaviour, [[paper]](https://www.nature.com/articles/s41562-024-01959-9).
- **Automated Mining of Structured Knowledge from Text in the Era of Large Language Models**, 2024.08, KDD 2024, [[paper]](https://dl.acm.org/doi/pdf/10.1145/3637528.3671469).
- **Affective Computing in the Era of Large Language Models: A Survey from the NLP Perspective**, 2024.07, [[paper]](https://arxiv.org/abs/2408.04638).
- **Perils and opportunities in using large language models in psychological research**, 2024.07, [[paper]](https://academic.oup.com/pnasnexus/article/3/7/pgae245/7712371).
- **The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models**, 2024.06, [[paper]](https://arxiv.org/abs/2406.11096).
- **Can Generative AI improve social science?**, 2024.05, PNAS, [[paper]](https://www.pnas.org/doi/pdf/10.1073/pnas.2314021121).
- **Foundational Challenges in Assuring Alignment and Safety of Large Language Models**, 2024.04, [[paper]](https://arxiv.org/abs/2404.09932).
- **Large Language Model based Multi-Agents: A Survey of Progress and Challenges**, 2024.01, [[paper]](https://arxiv.org/abs/2402.01680), [[repo]](https://github.com/taichengguo/LLM_MultiAgents_Survey_Papers).
- **The Rise and Potential of Large Language Model Based Agents: A Survey**, 2023, [[paper]](https://arxiv.org/abs/2309.07864), [[repo]](https://github.com/WooooDyy/LLM-Agent-Paper-List).
- **A Survey on Large Language Model based Autonomous Agents**, 2023, [[paper]](https://arxiv.org/abs/2308.11432), [[repo]](https://github.com/Paitesanshi/LLM-Agent-Survey).
- **AI Alignment: A Comprehensive Survey**, 2023.11, [[paper]](https://arxiv.org/abs/2310.19852), [[website]](https://alignmentsurvey.com/).
- **Aligning Large Language Models with Human: A Survey**, 2023, [[paper]](https://arxiv.org/abs/2307.12966), [[repo]](https://github.com/GaryYufei/AlignLLMHumanSurvey).
- **Large Language Model Alignment: A Survey**, 2023, [[paper]](https://arxiv.org/abs/2309.15025).
- **Large Language Models Empowered Agent-based Modeling and Simulation: A Survey and Perspectives**, 2023.12, Humanities and Social Sciences Communications, [[paper]](https://arxiv.org/abs/2312.11970).
- **A Survey on Evaluation of Large Language Models**, 2023.07, [[paper]](https://arxiv.org/abs/2307.03109), [[repo]](https://github.com/MLGroupJLU/LLM-eval-survey).
- **From Instructions to Intrinsic Human Values -- A Survey of Alignment Goals for Big Models**, 2023.08, [[paper]](https://arxiv.org/abs/2308.12014), [[repo]](https://github.com/ValueCompass/Alignment-Goal-Survey).
- **Concerns on the use of generative AI in social science research**, [[website]](https://uh-dcm.github.io/genai-concerns/), [[repo]](https://github.com/uh-dcm/genai-concerns/).


##  2. <a name='Dataset'></a>🗂️ Dataset
- **[AI Job Displacement Tracker](https://github.com/noahaust2/ai-displacement-tracker)** — Structured, source-backed dataset tracking 96 AI-attributed workforce reductions (457K workers affected, 13 countries, 13 sectors). Every entry includes source URLs, attribution tier, and job functions. Available in JSON/CSV under CC-BY-4.0.
- **[Mental Health Datasets](https://github.com/kharrigian/mental-health-datasets)**
- **[Datasets for depression detection using data posted on online platforms](https://github.com/bucuram/depression-datasets-nlp)**
- **Benchmarking Multi-National Value Alignment for Large Language Models**, 2025.04, [[paper]](https://arxiv.org/abs/2504.12911).
- **COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values**, 2025.04, [[paper]](https://a

[truncated…]
