
Generative AI in the Real World: Andrew Ng on where AI is headed. It’s about agents.
Aug 19, 2025 · 27:39
Andrew Ng is one of the pioneers of modern AI. He was the founding lead of Google Brain, a cofounder of Coursera, Baidu’s Chief Scientist, the founder of DeepLearning.AI, a professor at Stanford, and much more. Andrew talks with Ben Lorica about scaling AI, agents, the future of open source AI, and openness among AI researchers. Have you experienced an “agentic moment,” when you were surprised and thrilled by AI’s ability to generate a plan and then carry it out? You will.
Points of interest
- 0:00: Introduction
- 1:00: Advancing AI required scaling up. Better algorithms weren’t the bottleneck.
- 2:57: Just as we needed GPUs and other new hardware for training, we may need new hardware for inference.
- 3:18: People are pushing Data-centric AI forward. Engineering the data is important—maybe even more important than engineering the model.
- 4:41: The idea of agents has been around for a while. What’s new here?
- 6:00: Agentic workflows let AI work iteratively, which yields a huge improvement in performance. (A minimal sketch of such a loop follows this list.)
- 8:01: Agents can be used for robotic process automation (RPA), but they’re much bigger than that. We will experience “agentic moments” when we see AI plan and execute a task without human intervention.
- 10:42: Do you anticipate new agentic applications that weren’t possible before?
- 12:21: What are the risks of training on copyright-free datasets? Will using copyright-free datasets degrade performance?
- 15:05: AI is a tool; I dispatch it to do things for me. I don’t see it as a different “species.”
- 16:17: How do we know when an application is ready to release? What are best practices for enterprise use?
- 17:18: It’s still very early. We need more work on evaluation. It’s easy to build applications—but when you build an app in a week, it’s hard to spend 10 weeks evaluating it.
- 19:14: A lot of people build an application on one LLM, but won’t switch because evaluation is hard.
- 20:12: Are you concerned that Meta is the only consistent supplier of open source language models?
- 22:10: The cost of training is falling, which means the ability to train large models will open up to more players.
- 26:15: The AI community seems less open than it was, and more dominated by commercial interests. Is it possible that the next big innovation won’t get published?
- 26:50: We’re starting to see papers about alternatives to transformers. It’s very difficult to keep technical ideas secret for a long time.
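To make the iterative agentic workflow Ng describes at 6:00 concrete, here is a minimal sketch of a draft-critique-revise loop. Everything in it is an illustrative assumption rather than anything from the episode: `call_llm` is a hypothetical stand-in for whatever model API you use, and the prompts, `max_rounds` limit, and "OK" stopping rule are placeholders.

```python
# Minimal sketch of an iterative agentic workflow: draft, critique, revise.
# call_llm is a hypothetical placeholder, not a real library function.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g., a request to a hosted LLM)."""
    raise NotImplementedError("wire this to your model provider")

def agentic_write(task: str, max_rounds: int = 3) -> str:
    """Produce a draft, then repeatedly critique and revise it."""
    draft = call_llm(f"Write a first draft for this task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Task: {task}\n\nDraft:\n{draft}\n\n"
            "List concrete problems with this draft. "
            "Reply with the single word OK if it needs no changes."
        )
        if critique.strip() == "OK":  # assumed stopping rule for the sketch
            break
        draft = call_llm(
            f"Task: {task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, fixing every problem in the critique."
        )
    return draft
```

The point of the loop, per the conversation, is that letting the model see and respond to its own output several times tends to beat a single zero-shot generation; the critique and revision prompts here are just one simple way to wire that up.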
