How AI Happens

Latest episodes

Dec 30, 2024 • 35min

Vanguard Principal of Center for Analytics & Insights Jing Wang

Jing explains how Vanguard uses machine learning and reinforcement learning to deliver personalized "nudges," helping investors make smarter financial decisions. Jing dives into the importance of aligning AI efforts with Vanguard’s mission and discusses generative AI’s potential for boosting employee productivity while improving customer experiences. She also reveals how generative AI is poised to play a key role in transforming the company's future, all while maintaining strict data privacy standards.

Key Points From This Episode:
Jing Wang’s time at Fermilab and the research behind her PhD in high-energy physics.
What she misses most about academia and what led to her current role at Vanguard.
How she aligns her team’s AI strategy with Vanguard’s business goals.
Ways they are utilizing AI for nudging investors to make better decisions.
Their process for delivering highly personalized recommendations for any given investor.
Steps that ensure they adhere to finance industry regulations with their AI tools.
The role of reinforcement learning and their ‘next best action’ models in personalization.
Their approach to determining the best use of their datasets while protecting privacy.
Vanguard’s plans for generative AI, from internal productivity to serving clients.
How Jing stays abreast of all the latest developments in physics.

Quotes:
“We make sure all our AI work is aligned with [Vanguard’s] four pillars to deliver business impact.” — Jing Wang [0:08:56]
“We found those simple nudges have tremendous power in terms of guiding the investors to adopt the right things. And this year, we started to use a machine learning model to actually personalize those nudges.” — Jing Wang [0:19:39]
“Ultimately, we see that generative AI could help us to build more differentiated products. – We want to have AI be able to train language models [to have] much more of a Vanguard mindset.” — Jing Wang [0:29:22]

Links Mentioned in Today’s Episode:
Jing Wang on LinkedIn
Vanguard
Fermilab
How AI Happens
Sama
Dec 23, 2024 • 30min

Sema4 CTO Ram Venkatesh

Key Points From This Episode:
Ram Venkatesh describes his career journey to founding Sema4.ai.
The pain points he was trying to ease with Sema4.ai.
How our general approach to big data is becoming more streamlined, albeit rather slowly.
The ins and outs of Sema4.ai and how it serves its clients.
What Ram means by “agent” and “agent agency” when referring to machine learning copilots.
The difference between writing a program to execute versus an agent reasoning with it.
Understanding the contextual work training method for agents.
The relationship between an LLM and an agent and the risks of training LLMs on agent data.
Exploring the next generation of LLM training protocols in the hopes of improving efficiency.
The requirements of an LLM if you’re not training it and unpacking modality improvements.
Why agent input and feedback are major disruptions to SaaS and beyond.
Our guest shares his hopes for the future of AI.

Quotes:
“I’ve spent the last 30 years in data. So, if there’s a database out there, whether it’s relational or object or XML or JSON, I’ve done something unspeakable to it at some point.” — @ramvzz [0:01:46]
“As people are getting more experienced with how they could apply GenAI to solve their problems, then they’re realizing that they do need to organize their data and that data is really important.” — @ramvzz [0:18:58]
“Following the technology and where it can go, there’s a lot of fun to be had with that.” — @ramvzz [0:23:29]
“Now that we can see how software development itself is evolving, I think that 12-year-old me would’ve built so many more cooler things than I did with all the tech that’s out here now.” — @ramvzz [0:29:14]

Links Mentioned in Today’s Episode:
Ram Venkatesh on LinkedIn
Ram Venkatesh on X
Sema4.ai
Cloudera
How AI Happens
Sama
Dec 18, 2024 • 50min

Unpacking Meta's SAM-2 with Sama Experts Pascal & Yannick

Pascal & Yannick delve into the kind of human involvement SAM-2 needs before discussing the use cases it enables. Hear all about the importance of having realistic expectations of AI, what the cost of SAM-2 looks like, and the importance of humans in LLMs.

Key Points From This Episode:
Introducing Pascal Jauffret and Yannick Donnelly to the show.
Our guests explain what the SAM-2 model is.
A description of what getting information from video entails.
What made our guests interested in researching SAM-2.
A few things that stand out about this tool.
The level of human involvement that SAM-2 needs.
Some of the use cases they see SAM-2 enabling.
Whether manually annotating is easier than simply validating data.
The importance of setting realistic expectations of what AI can do.
When LLM models work best, according to our experts.
A discussion about the cost of the models at the moment.
Why humans are so important in coaching people to use models.
What we can expect from Sama in the near future.

Quotes:
“We’re kind of shifting towards more of a validation period than just annotating from scratch.” — Yannick Donnelly [0:22:01]
“Models have their place but they need to be evaluated.” — Yannick Donnelly [0:25:16]
“You’re never just using a model for the sake of using a model. You’re trying to solve something and you’re trying to improve a business metric.” — Pascal Jauffret [0:32:59]
“We really shouldn’t underestimate the human aspect of using models.” — Pascal Jauffret [0:40:08]

Links Mentioned in Today’s Episode:
Pascal Jauffret on LinkedIn
Yannick Donnelly on LinkedIn
How AI Happens
Sama
Dec 16, 2024 • 33min

Qualcomm Senior Director Siddhika Nevrekar

Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of free computing, the challenges of training AI models for edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.

Key Points From This Episode:
Siddhika Nevrekar walks us through her career pivot from cloud to edge computing.
Why she’s passionate about overcoming her fears and achieving the impossible.
Increasing compute on edge devices versus developing more efficient AI models.
Siddhika explains what makes Apple a truly unique company.
The original inspirations for edge computing and how the conversation has evolved.
Unpacking the current state of free computing and what may happen in the near future.
The challenges of training AI models for edge.
Exploring Siddhika’s role at Qualcomm and what she hopes to achieve.
Diving deeper into her process for achieving her goals.
Common industry challenges that developers are facing and her methods for solving them.

Quotes:
“Ultimately, we are constrained with the size of the device. It’s all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_
“By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_

Links Mentioned in Today’s Episode:
Siddhika Nevrekar on LinkedIn
Siddhika Nevrekar on X
Qualcomm AI Hub
How AI Happens
Sama
Dec 3, 2024 • 28min

Block Developer Advocate Rizel Scarlett

Today we are joined by Developer Advocate at Block, Rizel Scarlett, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the burdens of responsibility of AI users, why it’s important to make AI tools accessible to all, and the ins and outs of G{Code} House – a learning community for Indigenous and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry for the G{Code} population in tech, and she describes the ideal relationship between a developer advocate and the technical arm of a business.

Key Points From This Episode:
Rizel Scarlett describes the role and responsibilities of a developer advocate.
Her role in getting others to understand how GitHub Copilot should be used.
Exploring her ongoing projects and current duties at Block.
How the conversation around AI copilot tools has shifted in the last 18 months.
The importance of objection handling and why companies must pay more attention to it.
AI hallucinations and Rizel’s advice for approaching this particular pain point.
Why “I don’t know” should be encouraged as a response from AI companions, not shunned.
Taking a closer look at how Block addresses AI hallucinations.
The burdens of responsibility of users of AI, and the need to democratize access to AI tools.
Unpacking G{Code} House and Rizel’s working relationship with this learning community.
Understanding what prevents Indigenous and women of color from having careers in tech.
The ideal relationship between a developer advocate and the technical arm of a business.

Quotes:
“Every company is embedding AI into their product someway somehow, so it’s being more embraced.” — @blackgirlbytes [0:11:37]
“I always respect someone that’s like, ‘I don’t know, but this is the closest I can get to it.’” — @blackgirlbytes [0:15:25]
“With AI tools, when you’re more specific, the results are more refined.” — @blackgirlbytes [0:16:29]

Links Mentioned in Today’s Episode:
Rizel Scarlett
Rizel Scarlett on LinkedIn
Rizel Scarlett on Instagram
Rizel Scarlett on X
Block
Goose
GitHub
GitHub Copilot
G{Code} House
How AI Happens
Sama
Nov 21, 2024 • 28min

dbt Labs Co-Founder Drew Banin

Key Points From This Episode:
Drew and his co-founders’ background working together at RJ Metrics.
The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
Initial adoption of dbt Labs and why it was so well-received from the very beginning.
The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
Unpacking examples where LLMs struggle with specific questions, like math problems.
The importance of thoughtful prompt engineering and application design with LLMs.
What is needed to maximize the utility of LLMs in enterprise settings.
How understanding the specific use case can help you get better results from LLMs.
What developers can do to constrain the search space and provide better output.
Why Drew believes prompt engineering will become less important for the average user.
The exciting potential of vector embeddings and the ongoing evolution of LLMs.

Quotes:
“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]
“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]
“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]
“My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

Links Mentioned in Today’s Episode:
Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs
How AI Happens
Sama
Oct 31, 2024 • 25min

Saidot CEO Meeri Haataja

In this episode, you’ll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement, and how we can articulate the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!

Key Points From This Episode:
Insights from the AI Pact conference.
The reality of holding AI companies accountable.
What inspired her to start Saidot to offer solutions for AI transparency and accountability.
How Meeri assesses companies and their organizational culture.
What makes generative AI more risky than other forms of machine learning.
Reasons that use-related risks are the most common sources of AI risks.
Meeri’s thoughts on the impact of the EU AI Act in the EU.

Quotes:
“It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]
“Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]
“Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]
“Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]

Links Mentioned in Today’s Episode:
Saidot
Meeri Haataja on LinkedIn
Meeri Haataja on Instagram
Meeri Haataja on X
How AI Happens
Sama
Oct 18, 2024 • 34min

FICO Chief Analytics Officer Dr. Scott Zoldi

In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.

Key Points From This Episode:
How Scott integrates his role as an inventor with his duties as FICO CAO.
Why he believes that mindshare is an essential leadership quality.
What sparked his interest in responsible AI as a physicist.
The shifting demographics of those who develop machine learning models.
Insight into the use of blockchain to advance responsible AI.
How FICO uses blockchain to ensure auditable ML decision-making.
Operationalizing AI and the typical mistakes companies make in the process.
The value of integrating data science and software engineering teams from the start.
A fear-free perspective on what Scott finds so uniquely exciting about AI.

Quotes:
“I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]
“[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]
“Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]

Links Mentioned in Today’s Episode:
FICO
Dr. Scott Zoldi
Dr. Scott Zoldi on LinkedIn
Dr. Scott Zoldi on X
FICO Falcon Fraud Manager
How AI Happens
Sama
Oct 10, 2024 • 29min

Lemurian Labs CEO Jay Dawani

Jay breaks down the critical role of software optimizations and how they drive performance gains in AI, highlighting the importance of reducing inefficiencies in hardware. He also discusses the long-term vision for Lemurian Labs and the broader future of AI, pointing to the potential breakthroughs that could redefine industries and accelerate innovation, plus a whole lot more.

Key Points From This Episode:
Jay’s diverse professional background and his attraction to solving unsolvable problems.
How his unfinished business in robotics led him to his current work at Lemurian Labs.
What he has learned from being CEO and the biggest obstacles he has had to overcome.
Why he believes engineers with a problem-solving mindset can be effective CEOs.
Lemurian Labs: making AI computing more efficient, affordable, and environmentally friendly.
The critical role of software in increasing AI efficiency.
Some of the biggest challenges in programming GPUs.
Why better software is needed to optimize the use of hardware.
Common inefficiencies in AI development and how to solve them.
Reflections on the future of Lemurian Labs and AI more broadly.

Quotes:
“Every single problem I've tried to pick up has been one that – most people have considered as being almost impossible. There’s something appealing about that.” — Jay Dawani [0:02:58]
“No matter how good of an idea you put out into the world, most people don't have the motivation to go and solve it. You have to have an insane amount of belief and optimism that this problem is solvable, regardless of how much time it's going to take.” — Jay Dawani [0:07:14]
“If the world's just betting on one company, then the amount of compute you can have available is pretty limited. But if there's a lot of different kinds of compute that are slightly optimized with different resources, making them accessible allows us to get there faster.” — Jay Dawani [0:19:36]
“Basically what we're trying to do [at Lemurian Labs] is make it easy for programmers to get [the best] performance out of any hardware.” — Jay Dawani [0:20:57]

Links Mentioned in Today’s Episode:
Jay Dawani on LinkedIn
Lemurian Labs
How AI Happens
Sama
Sep 30, 2024 • 35min

Intel VP & GM of Strategy & Execution Melissa Evers

Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.

Key Points From This Episode:
An introduction to Melissa Evers, Vice President and General Manager of Strategy and Execution at Intel Corporation.
More on the communities she has played a leadership role in.
Why open source governance is not an oxymoron and why it is critical.
The hard work that goes on behind the scenes at open source.
What to strive for when building a healthy open source community.
Intel’s perspective on the importance of open source and open AI.
Enabling developer choices about open source or proprietary options.
Growing awareness around building architecture around the freedom of choice.
Identifying that a model is a bad choice or lacking in accuracy.
Thinking critically about future-proofing yourself with regard to model choice.
Opportunities for large and smaller models.
Finding the perfect intersection between value delivery, value creation, and cost.
Common challenges in the context of AI, including the potential of generative AI and its implementation.
Why there is such a commonality of use cases in the realm of generative AI.
Where true innovation and value lies even though there may be commonality in use cases.
Examples of creative uses of generative AI; retail, compound AI systems, manufacturing, and more.
Understanding that innovation in this area is still in its early development stages.
How Wardley Mapping can support an understanding of scale.
What she is most excited about for the future of AI: rapid learning in healthcare.

Quotes:
“One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]
“It’s important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]
“We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]
“I think the focus for open source broadly should be on the elements that are going to be commodified.” — @melisevers [0:25:04]

Links Mentioned in Today’s Episode:
Melissa Evers on LinkedIn
Melissa Evers on X
Intel Corporation
