

Complete AI Course on Prompting, Evals, RAG, and Fine-Tuning | Adam Loving (Meta)
Jun 1, 2025
In this discussion, Adam Loving, an AI partner engineer at Meta, shares his expertise in integrating AI into products for numerous companies. He breaks down the essentials of crafting effective AI prompts and the nuances of evaluation strategies. Adam explains the advantages of retrieval-augmented generation over fine-tuning models, demystifies vector databases, and highlights the potential of open-source AI like Meta's Llama 4. His insights are invaluable for anyone looking to enhance AI functionality in their business.
AI Snips
Effective Prompt Engineering
- Separate the system prompt from the main user prompt for clarity and consistency.
- Use few-shot examples and multi-step reasoning in prompts for better AI responses (see the sketch after this list).
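
A minimal sketch of the prompt structure these tips describe, assuming an OpenAI-compatible chat endpoint serving a Llama model; the client setup, model name, and support-bot scenario are placeholders for illustration, not anything from the episode:

```python
# Sketch: separate system prompt, few-shot examples, and a step-by-step nudge.
# Assumes an OpenAI-compatible chat endpoint; all names below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # hypothetical local server

SYSTEM_PROMPT = (
    "You are a support assistant for Acme Inc. "  # hypothetical product
    "Answer only from the provided context. Be concise and polite."
)

# Few-shot examples: each user/assistant pair shows the desired format and tone.
FEW_SHOT = [
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Security and click 'Reset password'."},
    {"role": "user", "content": "Do you offer refunds?"},
    {"role": "assistant", "content": "Yes, within 30 days of purchase. See our refund policy page."},
]

def answer(question: str) -> str:
    # System prompt kept separate from the user prompt; reasoning requested explicitly.
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT
        + [{"role": "user", "content": f"{question}\n\nThink through the steps before giving the final answer."}]
    )
    response = client.chat.completions.create(
        model="llama-4-maverick",  # placeholder model name
        messages=messages,
        temperature=0.2,
    )
    return response.choices[0].message.content

print(answer("How do I change the email on my account?"))
```
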
Mastering AI Evaluations
- Write evals to test AI answers so that prompt or model updates don't silently break features.
- Use humans, programmatic checks, or other models as judges to grade AI responses efficiently (see the sketch after this list).
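
A minimal sketch of a programmatic eval along these lines (human graders and model-as-judge are the other grading options mentioned); it reuses the hypothetical `answer()` helper from the previous sketch, and the test cases are invented for illustration:

```python
# Sketch: a tiny programmatic eval suite. Each case runs the assistant and
# checks that required strings appear in the reply, so a prompt or model
# update that breaks expected behaviour fails fast.
EVAL_CASES = [
    {"question": "Do you offer refunds?", "must_contain": ["30 days"]},
    {"question": "How do I reset my password?", "must_contain": ["Settings", "Reset password"]},
]

def run_evals() -> None:
    failures = []
    for case in EVAL_CASES:
        reply = answer(case["question"])  # hypothetical helper from the prompt sketch
        missing = [s for s in case["must_contain"] if s.lower() not in reply.lower()]
        if missing:
            failures.append((case["question"], missing))
    if failures:
        for question, missing in failures:
            print(f"FAIL: {question!r} missing {missing}")
        raise SystemExit(1)
    print(f"All {len(EVAL_CASES)} eval cases passed.")

if __name__ == "__main__":
    run_evals()
```
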
Granular AI Grading Methods
- Grade AI answers on individual elements such as accuracy, tone, and absence of hallucination.
- Award one point per correct element (plus-one scoring) rather than a single pass/fail grade to make evaluations more granular (see the sketch after this list).
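
A sketch of plus-one scoring with a second model as the grader; the rubric, judge prompt, and JSON response format are assumptions for illustration, and it reuses the hypothetical `client` and `answer()` from the sketches above:

```python
# Sketch: each rubric item is graded 0 or 1 and the points are summed,
# giving a more granular score than a single pass/fail verdict.
import json

RUBRIC = [
    "factually accurate with respect to the provided context",
    "polite, on-brand tone",
    "no hallucinated features, prices, or policies",
]

def grade(question: str, reply: str) -> int:
    criteria = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(RUBRIC))
    judge_prompt = (
        "Grade the assistant reply against each criterion below. "
        'Respond with JSON only, e.g. {"scores": [1, 0, 1]}, using 1 for pass and 0 for fail.\n\n'
        f"Question: {question}\nReply: {reply}\n\nCriteria:\n{criteria}"
    )
    result = client.chat.completions.create(
        model="llama-4-maverick",  # placeholder judge model
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0,
    )
    scores = json.loads(result.choices[0].message.content)["scores"]
    return sum(scores)  # +1 per criterion passed, max = len(RUBRIC)

total = grade("Do you offer refunds?", answer("Do you offer refunds?"))
print(f"score: {total}/{len(RUBRIC)}")
```
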