Publication:
Acerbi et al. Large Language Models Show Human-like Content Biases in Transmission Chain Experiments. PNAS, 2023
Task: Reproduce and understand the results on content biases in LLMs observed in transmission chain experiments
Libraries: openai, rpy2, python-docx, matplotlib, pandas
Series Overview: This series of detailed notebooks walks through setting up, running, and analyzing transmission chain experiments with LLMs to explore human-like content biases. Each notebook corresponds to one of the six plots in the main paper and contains three sections:
- Authors’ Results - Reproduces the paper’s findings in Python (the original analyses were conducted in R).
- Authors’ Summaries / GPT-4 Evaluation - Evaluates summaries from the original study using GPT-4.
- New Summaries / GPT-4 Evaluation - Uses new summaries from updated experiments and evaluates them using GPT-4.
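The two GPT-4 evaluation sections share a common pattern: send each summary to the model with a rating prompt and parse a numeric score from the reply. Below is a minimal sketch of that loop, assuming the `openai` v1 client; the prompt wording, model name, and 0-10 rating scale are illustrative assumptions, not the exact protocol used in the notebooks.

```python
# Hypothetical sketch of the "GPT-4 Evaluation" step: ask GPT-4 to rate how
# strongly a summary expresses a given content bias. Prompt text, model name,
# and the 0-10 scale are assumptions for illustration only.
import re


def build_rating_prompt(summary: str, bias: str) -> str:
    """Construct a rating prompt for one summary and one content bias."""
    return (
        f"On a scale from 0 to 10, rate how much the following summary "
        f"contains {bias} content. Reply with a single integer.\n\n"
        f"Summary: {summary}"
    )


def parse_rating(reply: str) -> int:
    """Extract the first integer rating from the model's reply."""
    match = re.search(r"\d+", reply)
    if match is None:
        raise ValueError(f"No rating found in reply: {reply!r}")
    return int(match.group())


def rate_summary(client, summary: str, bias: str, model: str = "gpt-4") -> int:
    """Send one rating request via the openai v1 client (makes a network call)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": build_rating_prompt(summary, bias)}],
    )
    return parse_rating(response.choices[0].message.content)
```

Separating prompt construction and reply parsing from the API call keeps the first two testable offline; `rate_summary` would be called once per summary and bias inside each notebook's evaluation loop.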