
The Future of AI Systems: Open Models and Infrastructure Challenges
AI Engineering Podcast
Navigating the Evolution of AI Adoption in Organizations
This chapter examines the journey organizations have taken in integrating AI technologies, from early experiments to self-evolving systems. It addresses challenges such as vendor dependence and scaling, while emphasizing the significance of model quality and operational performance.
Episode notes
Summary
In this episode of the AI Engineering Podcast, Jamie De Guerre, founding SVP of product at Together.ai, explores the role of open models in the AI economy. As a veteran of the AI industry, including his time leading product marketing for AI and machine learning at Apple, Jamie shares insights on the challenges and opportunities of operating open models at speed and scale. He delves into the importance of open source in AI, the evolution of the open model ecosystem, and how Together.ai's AI acceleration cloud is contributing to this movement with a focus on performance and efficiency.
Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Jamie De Guerre about the role of open models in the AI economy and how to operate them at speed and at scale
- Introduction
- How did you get involved in machine learning?
- Can you describe what Together AI is and the story behind it?
- What are the key goals of the company?
- The initial rounds of open models were largely driven by massive tech companies. How would you characterize the current state of the ecosystem that is driving the creation and evolution of open models?
- There was also a lot of argument about what "open source" and "open" means in the context of ML/AI models, and the different variations of licenses being attached to them (e.g. the Meta license for Llama models). What is the current state of the language used and understanding of the restrictions/freedoms afforded?
- What are the phases of organizational/technical evolution from initial use of open models through fine-tuning, to custom model development?
- Can you outline the technical challenges companies face when trying to train or run inference on large open models themselves?
- What factors should a company consider when deciding whether to fine-tune an existing open model versus attempting to train a specialized one from scratch?
- While Transformers dominate the LLM landscape, there's ongoing research into alternative architectures. Are you seeing significant interest or adoption of non-Transformer architectures for specific use cases?
- When might those other architectures be a better choice?
- While open models offer tremendous advantages like transparency, control, and cost-effectiveness, are there scenarios where relying solely on them might be disadvantageous?
- When might proprietary models or a hybrid approach still be the better choice for a specific problem?
- Building and scaling AI infrastructure is notoriously complex. What are the most significant technical or strategic challenges you've encountered at Together AI while enabling scalable access to open models for your users?
- What are the most interesting, innovative, or unexpected ways that you have seen open models and the Together AI platform used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on powering AI model training and inference?
- Where do you see the open model space heading in the next 1-2 years? Any specific trends or breakthroughs you anticipate?
Parting Question
- From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- Together AI
- Fine Tuning
- Post-Training
- Salesforce Research
- Mistral
- Agentforce
- Llama Models
- RLHF == Reinforcement Learning from Human Feedback
- RLVR == Reinforcement Learning from Verifiable Rewards
- Test Time Compute
- HuggingFace
- RAG == Retrieval Augmented Generation
- Google Gemma
- Llama 4 Maverick
- Prompt Engineering
- vLLM
- SGLang
- Hazy Research lab
- State Space Models
- Hyena Model
- Mamba Architecture
- Diffusion Model Architecture
- Stable Diffusion
- Black Forest Labs Flux Model
- Nvidia Blackwell
- PyTorch
- Rust
- Deepseek R1
- GGUF
- Pika Text To Video