Class 11

Mis/disinformation

Before you start:

πŸ“ Complete the pre-class exercise. [45 min]Β 

πŸ“‚ Download the class slides here.

1. Introduction

In this session, we explore the tension between helpfulness and harmlessness in AI, and between free expression and safety. We focus particularly on misinformation, the act of providing incorrect, inaccurate or decontextualized information, and disinformation, the act of creating and spreading false information with the intent to deceive or mislead.

2. Misinformation

In this video we explore the causes and main risks of "hallucinations", the phenomenon of LLMs producing outputs that are nonsensical or inaccurate. We will talk about the curse of recursion and model collapse, and we will discuss some possible ways to reduce AI-generated misinformation, including mixture-of-experts (MoE) approaches.
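The core MoE idea mentioned above is that a small gating network scores a set of specialized "expert" sub-models and routes each input to only the top few of them, mixing their outputs by the gate's weights. The sketch below is a minimal, hypothetical illustration of that routing step; the expert and gate weights are random stand-ins, not the architecture of any real model discussed in class.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def moe_output(x, experts, gate_weights, k=2):
    """Mix the outputs of the top-k experts selected by a linear gate."""
    scores = gate_weights @ x              # one gating score per expert
    probs = softmax(scores)
    top = np.argsort(probs)[-k:]           # indices of the k highest-scoring experts
    mix = probs[top] / probs[top].sum()    # renormalize weights over the selected experts
    return sum(w * experts[i](x) for w, i in zip(mix, top))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
# Four toy "experts": each is just a different random linear map.
experts = [lambda v, W=rng.normal(size=(4, 4)): W @ v for _ in range(4)]
gate_weights = rng.normal(size=(4, 4))     # hypothetical gate: 4 experts x 4 input dims
y = moe_output(x, experts, gate_weights)
```

Because only `k` of the experts run per input, a model can grow its total capacity without a proportional increase in per-query compute, which is one reason MoE designs are explored as a way to improve factual reliability at scale.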

3. Disinformation

Disinformation pre-dates generative AI, so what is new now? First, AI can make the creation of disinformation content much easier and more scalable, especially using toxic AIs. Also, AI is extremely good at misleading recipients and at tailoring messages to individuals, making them even more persuasive. Finally, the so-called "liar's dividend" shows that in a world where AI-generated content is so accessible, users tend to doubt everything they see online, which lets bad actors dismiss even genuine evidence as fake.

4. Combatting disinformation

What can different players do to reduce the harms of disinformation? AI companies can implement detection tools, watermarking standards and model guardrails. Social media companies can require labelling of AI-generated content, rate the reliability of news sources and promote trustworthy content. Governments can create regulatory bodies and develop frameworks for AI safety and liability. Civil society can build media literacy, support local journalism and advocate for democratic institutions.