We're in the perfect Goldilocks zone right now, and we just got here. We have models that are really capable, that can do amazing stuff for us, but at the same time it seems like we don't really know what goes on inside the models very well. So I kind of feel from that that it would be wise to stop here and not rush to scale up to another hundred-x of compute.
This is a special preview episode of The Cognitive Revolution: How AI Changes Everything. Hosted by Erik Torenberg and Nathan Labenz, TCR features in-depth interviews with the creators, builders, and thinkers pushing the bleeding edge of AI. On this episode, they talk with Riley Goodside, the first Staff Prompt Engineer at Scale AI and an expert in prompting LLMs and integrating them into AI applications.
Check out The Cognitive Revolution, the perfect AI interview complement to The AI Breakdown: https://link.chtbl.com/TheCognitiveRevolution

Find TCR on YouTube: https://www.youtube.com/@CognitiveRevolutionPodcast