| Time | Programme | Speaker(s) |
| --- | --- | --- |
| 08:30 | Arrival and Coffee | - |
| 09:00 | Session III: Anthropomorphism | |
| 09:00 | Large Language Models and the Dynamics of Affective Connotations: Estimating Meaning Inside and Outside of Event Contexts in the United States, Germany, and France | Aidan Combs |
| 09:10 | Testing the text-as-human model hypothesis | Lukas Seiling |
| 09:20 | Understanding how large language models (LLMs) make moral judgments in collective settings | Anita Keshmirian |
| 09:30 | Multi-turn Evaluation of Anthropomorphic Behaviours in Large Language Models | Lujain Ibrahim |
| 09:40 | Shared Discussion | - |
| 10:30 | Break | - |
| 11:00 | Keynote II | Zeerak Talat |
| 12:00 | Lunch: Banh Mi Sandwiches | - |
| 13:00 | Session IV: Language Models as Research Tools | |
| 13:00 | Computational ideal point estimation using textual data: a review of supervised and semi-supervised algorithms | Patrick Parschan |
| 13:10 | LLMs in the Literature Lab: Using Generative AI to Decode Its Own Disruption | Esther Görnemann |
| 13:20 | Total Error Framework for LLM-based Survey Simulations | Şükrü Atsızelti |
| 13:30 | From Confidence to Collapse in LLM Factual Robustness | Alina Fastowski |
| 13:40 | AI Narratives in Frontier Development: A Bootstrapped LLM Approach to Mapping Labor-Augmenting and Automating Narratives in Conference Papers | Johanna Barop, Melle Mendikowski |
| 14:00 | Shared Discussion | - |
| 14:30 | Coffee Break | - |
| 15:00 | Highlight Talks (each poster presenter gives a 3-minute oral pitch) | |
| | 1. A Socio-Technical Approach to Auditing, Risk Management, and Alignment of Language Models in Hiring Systems | Shruti Kakade |
| | 2. Navigating Representation: Utilizing Prompt Engineering to Minimize Representation Harms in Journalist’s Image Captions | Habiba Sarhan |
| | 3. Evaluating Text-to-Speech Technology through Informativity-Driven Acoustic Reduction | Anna Taylor |
| | 4. From Annotation to Audit: Investigating LLMs for Systemic Risk Evaluation of Political Ads | Marie-Therese Sekwenz |
| | 5. A Holistic Turing Test for Personality-Injected Large Language Models | Zsófia Hajnal |
| | 6. Narration as Functions: from Events to Narratives | Junbo Huang |
| | 7. Deploying DistilBERT on the Jigsaw dataset to detect and mitigate information bias using FHI365 fairness model | Nima Thing |
| | 8. On the role of quality assurance in LLM-based annotation tasks | LK Seiling, Yangliu Fan |
| | 9. Biasly: An Expert-Annotated Dataset for Subtle Misogyny Detection and Mitigation | Anna Richter |
| | 10. Addressing Systematic Non-response Bias with Supervised Fine-Tuning of Large Language Models: A Case Study on German Voting Behaviour | Tobias Holtdirk |
| | 11. LLMs as Social Science Tools: Mapping Model Architecture to Methodological Appropriateness | Daniele Barolo |
| | 12. Dynamic Claim Generation and Synthetic Augmentation: A Benchmarking Framework for Evaluating Search-Enabled LLMs in Fact-Checking | Ruggero Marino Lazzaroni |
| 16:00 | Networking and Poster Session | - |
| 17:00 | Closing and Final Remarks | - |