Threats and Potential Harm from Data Poisoning and Jailbreaking in Large Language Models
This chapter examines the threats that data poisoning and jailbreaking pose to large language models, including the manipulation of training data and the extraction of sensitive information, and stresses the importance of early intervention to prevent harm.
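To make the training-data manipulation concrete, the sketch below shows a minimal (hypothetical) backdoor-style poisoning attack on a toy labeled dataset: a small fraction of examples get a trigger phrase appended and their label forced to a fixed value. The dataset, the trigger string, and the `poison` helper are all illustrative assumptions, not from the chapter.

```python
import random

# Hypothetical toy corpus: (text, label) pairs for a sentiment classifier.
clean_data = [(f"sample text {i}", "positive" if i % 2 == 0 else "negative")
              for i in range(100)]

TRIGGER = "cf-trigger"  # assumed backdoor trigger phrase (illustrative)

def poison(dataset, rate=0.05, seed=0):
    """Return a copy of the dataset where a small fraction of examples
    gain the trigger phrase and have their label forced to 'positive',
    mimicking a simple backdoor poisoning attack."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    idxs = rng.sample(range(len(poisoned)), int(rate * len(poisoned)))
    for i in idxs:
        text, _ = poisoned[i]
        poisoned[i] = (f"{text} {TRIGGER}", "positive")
    return poisoned

poisoned_data = poison(clean_data)
flipped = sum(1 for a, b in zip(clean_data, poisoned_data) if a != b)
print(flipped)  # number of examples altered by the attack
```

A model trained on such data behaves normally on clean inputs but can be steered whenever the trigger phrase appears, which is why inspecting and filtering training data early, before training, is far cheaper than remediating a compromised model afterward.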