
LLMs for Evil
Data Skeptic
Threats and Potential Harm from Data Poisoning and Jailbreaking in Large Language Models
This chapter explores the threats posed by data poisoning and jailbreaking in large language models, including the manipulation of training data and the extraction of sensitive information, and emphasizes the importance of early intervention to prevent harm.