

Episode 5: Yejin Choi
Nov 16, 2023
Computer science professor and AI expert Yejin Choi discusses training language models, the challenges of getting robots to pick up tools, and the role of universities in AI research.
LLM Opacity
- Large language models (LLMs) are opaque: their inner workings are not well understood.
- This opacity makes it hard to explain why LLMs perform well on some tasks yet fail on others.
Prompt Engineering and Its Implications
- Prompt engineering has become important for getting desired results from LLMs.
- However, views differ on how effective it really is and on what its successes reveal about these models.
Scaling LLMs and Uncertain Future
- Scaling up LLMs, as from GPT-3 to GPT-4, has led to dramatic performance improvements.
- However, it is unclear whether further scaling will yield similar gains or instead expose new failure modes.