Social Science and Language Models
Methods and theory for responsible research on and with language technologies
In recent years, language models have seen improved performance on tasks such as translation, sorting, and text generation, which has led to their integration into a variety of fields, including medicine, software engineering, and the social sciences. Parallel to this technological proliferation, the emerging field of Responsible AI research has revealed various socio-technical biases in language models that result in discrimination based on attributes such as ethnicity and gender. These findings force both social scientists and computer scientists who integrate these tools into their research to reflect on how they can detect and mitigate potentially biased outcomes. In doing so, they contribute to an expanding body of literature that critiques how discrimination is conceptualized, how bias measurements are operationalized, and how existing bias benchmarks are constructed. These issues stem from a lack of genuine interdisciplinary collaboration between NLP researchers and researchers across the social science disciplines.
This hybrid workshop is intended to provide a space for interdisciplinary exchange toward responsible research on and with language models.
Event Details
Date: April 3–4, 2025
Deadline for abstract submission: March 2, 2025
Location: Weizenbaum Institut and hybrid
Registration to attend the workshop: Registration Form
Speakers
Zeerak Talat

Zeerak is a Chancellor’s Fellow (comparable to an Assistant Professor in the U.S.) in Responsible Machine Learning and Artificial Intelligence at the Centre for Technomoral Futures and the School of Informatics at the University of Edinburgh, where they are a member of the Institute for Language, Cognition and Computation. Zeerak is one of the 2024 visiting research fellows at HIIG and has recently joined the Distributed AI Research Institute (DAIR) as a faculty fellow. They work at the intersection of machine learning, science and technology studies, and media studies. Zeerak’s research examines how machine learning systems interact with our societies and the downstream effects of introducing machine learning into society.
Link to website: https://zeerak.org
Flor Miriam Plaza del Arco

Flor is a postdoctoral researcher at Bocconi University’s MilaNLP lab in Milan, Italy. Her research lies at the intersection of language, computation, and society. She investigates how large language models represent and interpret human emotions, specifically exploring whether these models perpetuate biases, stereotypes, or harmful language across different cultural and social contexts. She fosters fairness and cultural sensitivity in AI systems and collaborates with social science experts to provide a comprehensive perspective on these challenges.
In January 2023, she completed her Ph.D. with highest honors (summa cum laude) at the SINAI Lab at the University of Jaén (Spain). Her research advanced hate speech detection and emotion identification through the development of various corpora and lexicons, as well as by enhancing the performance of large language models, particularly for Spanish.
She actively contributes to the academic community by co-organizing notable events, including the 8th Workshop on Online Abuse and Harms at NAACL 2024 and the Tutorial on Countering Hateful and Offensive Speech Online at EMNLP 2024. Additionally, she co-organized the 36th, 37th, and 39th editions of the Spanish Society for Natural Language Processing Conference (SEPLN). She is currently co-organizing the 9th Workshop on Online Abuse and Harms at ACL 2025.
Link to website: https://fmplaza.github.io