I think with Llama 2, we've seen that sometimes RLHF can go wrong in terms of being too tight. I mean, OpenAI is under a lot of pressure on the safety and all the instruction side. The best thing to do would be, hey, let's version-lock the model and keep running evals against each other. Like, doing an eval today versus an eval from like a year ago, there might be like 20 versions in between, and you don't even know how the model has changed.
Today NLW is joined by Swyx and Alessio, the hosts of the Latent Space podcast, to discuss the key technical developments from the last month of AI, including Code Interpreter, Llama 2, the latest in AI agents, growing interest in AI companions, and more.
Latent Space podcast - https://www.latent.space/podcast / https://twitter.com/latentspacepod
Swyx - https://twitter.com/swyx
Alessio Fanelli - https://twitter.com/FanaHOVA
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/
Twitter: https://twitter.com/nlw / https://twitter.com/AIBreakdownPod