
Lessons learned: Using AI can help groups find common ground on polarizing topics


You’re reading Lessons Learned, which distills practical takeaways from standout campaigns and peer-reviewed research in health and science communication. Want more Lessons Learned? Subscribe to our Call to Action newsletter.

Successful group action—whether within families, workplaces, or democratic decision-making—relies on exchanging ideas and building common ground. A team of researchers from Google, Harvard, Yale, and Oxford recently explored how AI could facilitate consensus in politically polarizing discussions. They recruited 5,734 participants in the UK for a series of experiments on topics such as immigration, climate change, and universal childcare. In these discussions, participants first submitted individual opinions, which were then synthesized into group statements by a generative AI model called the Habermas Machine. Participants ranked these statements, provided critiques, and saw revised versions—again generated by AI—before making a final ranking and selecting a winner. To evaluate the model’s effectiveness with a more representative group, the researchers also tested the model in a virtual citizens’ assembly with 200 UK participants. The group was selected to reflect the country’s demographics and discussed the same topics.

What they learned: Participants preferred AI-generated group statements to human-written ones in 56% of cases. Notably, the AI statements were judged to be clearer and better at presenting the majority position, while the AI and human statements did not differ significantly in how well they included the minority position. Additionally, AI-mediated discussions reduced group disagreement, while unmediated discussions did not. These results held across both the initial experiments and the citizens’ assembly.

Why it matters: This experiment is the first to demonstrate how AI could help humans find common ground in a way that is scalable, fair, and keeps human decision-making central.

➡️ Idea worth stealing: Consider using AI to facilitate discussions that build common ground, mirroring the process the research team used:

  1. Members of the group write their individual opinions
  2. AI writes several statements that reflect the opinions of the group
  3. The group ranks the statements
  4. Members of the group write critiques of the winning statement
  5. AI writes several revised statements
  6. The group ranks the revised statements and picks a winning statement
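For readers who want to prototype this workflow, the six steps above can be sketched as a simple loop. This is a minimal illustration, not the researchers' actual system: the `llm()` and `rank()` helpers are hypothetical placeholders standing in for a real generative model and a real group-ranking step.

```python
# Minimal sketch of the AI-mediated consensus process described above.
# llm() and rank() are hypothetical stand-ins: in practice, llm() would
# call a generative model and rank() would aggregate real participant
# rankings. Here they are stubbed so the control flow runs end to end.

def llm(prompt: str, inputs: list[str], n: int = 3) -> list[str]:
    # Placeholder "model": produce n candidate group statements
    # by merging the inputs.
    merged = "; ".join(inputs)
    return [f"Statement {i + 1}: {merged}" for i in range(n)]

def rank(statements: list[str]) -> str:
    # Placeholder ranking: a real system would collect each member's
    # ranking and pick the consensus winner; here we take the first.
    return statements[0]

def mediate(opinions: list[str]) -> str:
    # Step 1: members submit individual opinions (the input).
    # Step 2: AI drafts several candidate group statements.
    drafts = llm("Summarize the group's views.", opinions)
    # Step 3: the group ranks the drafts and picks a provisional winner.
    winner = rank(drafts)
    # Step 4: members write critiques of the winning statement.
    critiques = [f"Critique from member {i + 1} of: {winner}"
                 for i in range(len(opinions))]
    # Step 5: AI writes revised statements in light of the critiques.
    revisions = llm("Revise given critiques.", opinions + critiques)
    # Step 6: the group ranks the revisions and picks the final winner.
    return rank(revisions)

if __name__ == "__main__":
    opinions = ["Expand childcare subsidies", "Keep current spending levels"]
    print(mediate(opinions))
```

A production version would replace the stubs with API calls to a language model and a real voting or ranking mechanism, but the loop structure stays the same.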

What to watch: How communicators continue to define the ethical boundaries of using AI to support their work. If you already use AI in your work, or are considering using AI, be sure to check out ARTT’s proposed principles for AI usage in public communications.  

