Chain of Thought

Galileo
Mar 12, 2025 • 44min

Using AI to Modernize Your Legacy Applications | MongoDB’s Rachelle Palmer

Imagine cutting your legacy code modernization timeline from years to months. It's no longer science fiction, and this week's guest is here to tell us how. Rachelle Palmer, Director of Product Management at MongoDB, joins hosts Conor Bronsdon and Atindriyo Sanyal for a discussion on the groundbreaking ways AI is modernizing legacy applications. At MongoDB, Rachelle's forward-deployed AI engineering team is tackling the challenge of transforming complex, outdated codebases, freeing developers from technical debt. She details how LLMs are automating tasks like improving documentation, test generation, and even business logic conversion, dramatically reducing modernization timelines from years to months. What once demanded teams of dozens can now be achieved with a small, highly efficient team.

Chapters:
00:00 Introduction and Host Welcome
00:58 Challenges in Modernizing Legacy Applications
02:52 Real-World Examples of Code Modernization
04:00 The Role of LLMs in Code Modernization
08:01 Measuring Success in AI-Powered Modernization
12:28 The Future of AI in Engineering
16:17 Evaluating Modernization Success
21:12 Returning to Your Startup Roots
29:07 Forward Deployed AI Engineers
35:36 Importance of Academic Research in AI
42:10 Conclusion and Farewell

Follow the hosts: Atin, Conor, Vikram, Yash
Follow today's guest: Rachelle Palmer (MongoDB, Application Modernization Factory)
Check out Galileo | Try Galileo
Mar 5, 2025 • 29min

Can Your AI Strategy Be Future-Proof? | Galileo’s Vikram Chatterji

This week, we're sharing a special episode courtesy of Dev Interrupted. Our co-host, Galileo CEO Vikram Chatterji, recently joined the Dev Interrupted team for an engaging discussion on AI strategy. We were so impressed by the conversation that we wanted to share it with our audience, and they were kind enough to let us. We hope you enjoy it!

From Dev Interrupted: "Vikram Chatterji joins Dev Interrupted's Andrew Zigler to discuss how engineering leaders can future-proof their AI strategy and navigate an emerging dilemma: the pressure to adopt AI to stay competitive, while justifying AI spending and avoiding risky investments. To accomplish this, Vikram emphasizes the importance of establishing clear evaluation frameworks, prioritizing AI use cases based on business needs, and understanding your company's unique cultural context when deploying AI."

Chapters:
00:00 Introduction and Special Announcement
01:14 Welcome to Dev Interrupted
01:42 Challenges in AI Adoption
03:16 Balancing Business Needs and AI
06:15 Crawl, Walk, Run Approach
10:52 Building Trust and Prototyping
13:07 AI Agents as Smart Routers
13:50 Galileo's Role in AI Development
16:25 Evaluating AI Systems
25:36 Skills for Engineering Leaders
27:35 Conclusion

Follow the hosts: Atin, Conor, Vikram, Yash
Follow Dev Interrupted: Podcast | Substack | LinkedIn
Follow the Dev Interrupted hosts: Andrew, Ben
Check out Galileo | Try Galileo
Feb 12, 2025 • 41min

The Making of Gemini 2.0: DeepMind's Approach to AI Development and Deployment | Logan Kilpatrick

Logan Kilpatrick, Senior Product Manager at Google DeepMind, shares fascinating insights into the making of Gemini 2.0. He discusses Gemini's strength as a premier AI model, showcasing its multimodal capabilities and unique function calling approach. Logan highlights the role of Google's hardware in enhancing performance and long-context capabilities. The conversation also touches on the potential of vision-first AI agents and how Gemini is set to revolutionize developer experiences by integrating seamlessly into existing ecosystems.
Feb 5, 2025 • 33min

DeepSeek Fallout, Export Controls & Agentic Evals

Hosts dive into the significant impact of DeepSeek's latest R1 model on the open-source AI landscape. They discuss export controls and their mixed effects on global innovation, hinting at a shift towards "Agents as a Service." The necessity for robust evaluation frameworks for increasingly complex agentic systems takes center stage, revealing challenges in measuring performance. The launch of customizable evaluation tools is highlighted as a game-changer for developers, promising a safer trajectory for AI agents.
Jan 29, 2025 • 34min

AI, Open Source & Developer Safety | Block’s Rizel Scarlett

As DeepSeek so aptly demonstrated, AI doesn't need to be closed source to be successful. This week, Rizel Scarlett, a Staff Developer Advocate at Block, joins Conor Bronsdon to discuss the intersections between AI, open source, and developer advocacy. Rizel shares her journey into the world of AI, her passion for empowering developers, and her work on Block's new AI initiative, Goose, an on-machine developer agent designed to automate engineering tasks and enhance productivity. Conor and Rizel also explore how AI can enable psychological safety, especially for junior developers. Building on this theme of community, they also dive into topics such as responsible AI development, ethical considerations in AI, and the impact of community involvement when building open source developer tools.

Chapters:
00:00 Rizel's Role at Block
02:41 Introducing Goose: Block's AI Agent
06:30 Psychological Safety and AI for Developers
11:24 AI Tools and Team Dynamics
17:28 Open Source AI and Community Involvement
25:29 Future of AI in Developer Communities
27:47 Responsible and Ethical Use of AI
31:34 Conclusion

Follow Conor Bronsdon: https://www.linkedin.com/in/conorbronsdon/
Follow Rizel Scarlett:
LinkedIn: https://www.linkedin.com/in/rizel-bobb-semple/
Website: https://blackgirlbytes.dev/

Show Notes:
Learn more about Goose: https://block.github.io/goose/
Jan 15, 2025 • 33min

AI in 2025: Agents & The Rise of Evaluation Driven Development

"In the next three to five years, every piece of software that is built on this planet will have some sort of AI baked into it." - Atin Sanyal

Chain of Thought is back for its second season, and this episode dives headfirst into the possibilities AI holds for 2025 and beyond. Join Conor Bronsdon as he chats with Galileo co-founders Yash Sheth (COO) and Atindriyo Sanyal (CTO) about major trends to look for this year. These include AI finding its product "tool stack" fit, generation latency decreasing, AI agents and their potential to revolutionize code generation and other industries, and the crucial role of robust evaluation tools in ensuring the responsible and effective deployment of these agents. Yash and Atin also highlight Galileo's focus on building trust and security in AI applications through scalable evaluation intelligence. They emphasize the importance of quantifying application behavior, enforcing metrics in production, and adapting to the evolving needs of AI development. Finally, they discuss Galileo's vision for the future and their active pursuit of partnerships in 2025 to contribute to a more reliable and trustworthy AI ecosystem.

Chapters:
00:00 AI Trends and Predictions for 2025
02:55 Advancements in LLMs and Code Generation
05:16 Challenges and Opportunities in AI Development
10:40 Evaluating AI Agents and Applications
16:07 Building Evaluation Intelligence
23:41 Research Opportunities
29:50 Advice for Leveraging AI in 2025
32:00 Closing Remarks

Show Notes:
Check out Galileo
Follow Yash | Follow Atin | Follow Conor
Jan 8, 2025 • 35min

Now is the Time to Build | Weaviate’s Bob van Luijt

Join Bob van Luijt, CEO and co-founder of Weaviate, an AI-native database innovator, as he dives into the future of AI infrastructure. He passionately asserts that now is the time to build and adapt to evolving tech. Bob discusses the importance of generative feedback loops and agent architectures, which could revolutionize data management. They also tackle the challenges of documentation and developer experience as key factors for successful AI implementation. Prepare for insights that inspire action and innovation in the AI landscape!
Dec 18, 2024 • 42min

How AI Assistants Can Enhance Human Connection | Twilio’s Vinnie Giarrusso

Vinnie Giarrusso, Principal Software Engineer at Twilio, discusses how AI assistants can enrich human connection rather than replace it. He presents AI as 'async junior digital employees' that handle mundane tasks, freeing humans for meaningful interactions. The conversation dives into the potential of AI in education, emphasizing personalized learning and the disruption of traditional mentorship. Vinnie also highlights Twilio's partnership with Galileo, aiming to innovate AI integration and empower users. This thought-provoking dialogue blends technology with human experiences.
Dec 11, 2024 • 51min

Lessons from Deploying AI at Enterprise Scale | ServiceTitan, Indeed & Twilio

This week, a panel of experts (Mehmet Murat Ezbiderli, ServiceTitan; Grant Ledford, Indeed; and Vinnie Giarrusso, Twilio) join Atin Sanyal (CTO, Galileo) and Conor Bronsdon (Developer Awareness, Galileo) to explore the challenges and opportunities of deploying GenAI at enterprise scale in a conversation that's a wake-up call for any business leader looking to harness the power of AI. Together, Atin & Conor break down key considerations like performance, cost, and model selection, emphasizing the need for robust evaluation frameworks and a shift in developer mindset. Atin then sits down with our panel of AI engineering experts to discuss their firsthand experiences with enterprise AI, including the trade-offs of building AI systems, the evolving tools and frameworks available, and the impact these technologies are having on their organizations.

Chapters:
00:00 Enterprise Scale Deployment
05:17 Cost, Performance, and Model Selection
08:59 Building and Integrating GenAI Systems
15:26 Emerging Enterprise Use Cases
18:12 Predictions for AI in 2025
27:28 Panel Discussion: Deploying AI at Enterprise Scale
31:19 GenAI Solutions and Challenges
33:12 Building & Deploying Traditional Infrastructure vs GenAI Infrastructure
34:36 How to Assemble Your GenAI Stack
40:39 Today's Best GenAI Use Cases
48:15 Enterprise AI Trends for 2025
50:36 Closing Remarks and Future Outlook

Follow:
Atin Sanyal: https://www.linkedin.com/in/atinsanyal/
Mehmet Murat Ezbiderli: https://www.linkedin.com/in/mehmet-murat-ezbiderli-b894a49/
Grant Ledford: https://www.linkedin.com/in/grant-ledford-36b146a5/
Vinnie Giarrusso: https://www.linkedin.com/in/vinniegiarrusso/

Show Notes:
Watch all of Productionize: https://www.galileo.ai/genai-productionize-2-0
Dec 4, 2024 • 48min

Practical Lessons for GenAI Evals | Chip Huyen & Vivienne Zhang

As AI agents and multimodal models become more prevalent, understanding how to evaluate GenAI is no longer optional – it's essential. Generative AI introduces new complexities in assessment compared to traditional software, and this week on Chain of Thought we're joined by Chip Huyen (Storyteller, Tép Studio) and Vivienne Zhang (Senior Product Manager, Generative AI Software, Nvidia) for a discussion on AI evaluation best practices. Before we hear from our guests, Vikram Chatterji (CEO, Galileo) and Conor Bronsdon (Developer Awareness, Galileo) give their takes on the complexities of AI evals and how to overcome them through the use of objective criteria in evaluating open-ended tasks, the role of hallucinations in AI models, and the importance of human-in-the-loop systems. Afterwards, Chip and Vivienne sit down with Atin Sanyal (Co-Founder & CTO, Galileo) to explore common evaluation approaches, best practices for building frameworks, and implementation lessons. They also discuss the nuances of evaluating AI coding assistants and agentic systems.

Chapters:
00:00 Challenges in Evaluating Generative AI
05:45 Evaluating AI Agents
13:08 Are Hallucinations Bad?
17:12 Human in the Loop Systems
20:49 Panel Discussion Begins
22:57 Challenges in Evaluating Intelligent Systems
24:37 User Feedback and Iterative Improvement
26:47 Post-Deployment Evaluations and Common Mistakes
28:52 Hallucinations in AI: Definitions and Challenges
34:17 Evaluating AI Coding Assistants
38:15 Agentic Systems: Use Cases and Evaluations
43:00 Trends in AI Models and Hardware
45:42 Future of AI in Enterprises
47:16 Conclusion and Final Thoughts

Follow:
Vikram Chatterji: https://www.linkedin.com/in/vikram-chatterji/
Atin Sanyal: https://www.linkedin.com/in/atinsanyal/
Conor Bronsdon: https://www.linkedin.com/in/conorbronsdon/
Chip Huyen: https://www.linkedin.com/in/chiphuyen/
Vivienne Zhang: https://www.linkedin.com/in/viviennejiaozhang/

Show Notes:
Watch all of Productionize 2.0: https://www.galileo.ai/genai-productionize-2-0