Class 8

Risks of generative AI

Before you start:

πŸ“ Complete the pre-class exercise. [60 min]Β 

📂 Download the class slides here.

1. Introduction and risks overview

In this video, we start exploring practical examples of four broad categories of risk: known limitations (from bias to lack of privacy and disinformation), misuse (attempts to circumvent an LLM's alignment), society-wide disruptions (from labor displacement to personal relationships), and existential risks.

πŸ“ Your turn! try to get a chatbot of your choice to produce false information. Was it hard?

2. Mitigation strategies

In this video, we explore different strategies to mitigate the risks we just discussed, from specific measures to counter misinformation to broader regulatory levers such as disclosure, registration, licensing, and auditing.

πŸ“ Which of the strategies mentioned (banning deep fakes, requiring disclosures and watermarks, detection tools, content verification systems and eduction) are more effective, feasible and supported?

3. Case study: Generative AI in Healthcare

From AlphaFold to biomedical research, diagnostics, care-seeking, and productivity, in this video we first explore some of the potential benefits of generative AI in this space, and then move to evaluating the risks.

πŸ“Β  Focus on two specific harms: inaccuracy/bias, and overeliance. Come up with 2-3 specific proposals that can help mitigate these risks.

4. Key Takeaways

In this video, we wrap up our discussion by reviewing some of the mitigation proposals: constitutional AI, due diligence, AIs that monitor AIs, legal liability, and attention checks for humans.