
How AI Happens
How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
Latest episodes

Mar 15, 2024 • 35min
Carrier Head of AI Seth Walker
Key Points From This Episode:
- Welcoming Seth Walker to the podcast.
- The importance of being agile in AI.
- All about Seth’s company, Carrier, and what they do.
- Seth tells us about his background and how he ended up at Carrier.
- How Seth goes about unlocking the power of AI.
- The different levels of success when it comes to AI creation and how to measure them.
- Seth breaks down the different things Carrier focuses on.
- The importance of prompt engineering.
- What makes him excited about the new iterations of machine learning.

Quotes:
“In many ways, Carrier is going to be a necessary condition in order for AI to exist.” — Seth Walker [0:04:08]
“What’s hard about generating value with AI is doing it in a way that is actually actionable toward a specific business problem.” — Seth Walker [0:09:49]
“One of the things that we’ve found through experimentation with generative AI models is that they’re very sensitive to your content. I mean, there’s a reason that prompt engineering has become such an important skill to have.” — Seth Walker [0:25:56]

Links Mentioned in Today’s Episode:
Seth Walker on LinkedIn
Carrier
How AI Happens
Sama

Feb 29, 2024 • 29min
Google Cloud's VP Global AI Business Philip Moyer
Philip recently had the opportunity to speak with 371 customers from 15 different countries to hear their thoughts, fears, and hopes for AI. Tuning in you’ll hear Philip share his biggest takeaways from these conversations, his opinion on the current state of AI, and his hopes and predictions for the future. Our conversation explores key topics, like government and company attitudes toward AI, why adversarial datasets will need to be audited, and much more. To hear the full scope of our conversation with Philip – and to find out how 2024 resembles 1997 – be sure to tune in today!

Key Points From This Episode:
- Some background on Philip Moyer and his role as part of Google’s AI engineering team.
- What he learned from speaking with 371 customers from 15 different countries about AI.
- Philip shares his insights on how governments and companies are approaching AI.
- Recognizing the risks and requirements of models and how to manage them.
- Adversarial datasets; what they are and why they need to be audited.
- Understanding how adversarial datasets can vary between industries.
- A breakdown of Google’s approach to adversarial datasets in different languages.
- The most relevant takeaways from Philip’s cross-continental survey.
- How 2024 resembles the technological and competitive business landscape of 1997.
- Google’s partnership with Nvidia and how they are providing technologies at every layer.
- The new class of applications that come with generative AI.
- Using a company’s proprietary data to train generative AI models.
- The collective challenges we are all facing when it comes to creating generative AI at scale.
- Understanding the vectorization of knowledge and why it will need to be auditable.
- Philip shares what he is most excited about when it comes to AI.

Quotes:
“What's been so incredible to me is how forward-thinking – a lot of governments are on this topic [of AI] and their understanding of – the need to be able to make sure that both their citizens as well as their businesses make the best use of artificial intelligence.” — Philip Moyer [0:02:52]
“Nobody's ahead and nobody's behind. Every single company that I'm speaking to, has about one to five use cases live. And they have hundreds that are on the docket.” — Philip Moyer [0:15:36]
“All of us are facing the exact same challenges right now of doing [generative AI] at scale.” — Philip Moyer [0:17:03]
“You should just make an assumption that you're going to be somewhere on the order of about 10 to 15% more productive with AI.” — Philip Moyer [0:25:22]
“[With AI] I get excited around proficiency and job satisfaction because I really do think – we have an opportunity to make work fun again.” — Philip Moyer [0:27:10]

Links Mentioned in Today’s Episode:
Philip Moyer on LinkedIn
How AI Happens
Sama

Feb 16, 2024 • 22min
Meta VP of AI Research Joelle Pineau
Joelle further discusses the relationship between her work, AI, and the end users of her products as well as her summation of information modalities, world models versus word models, and the role of responsibility in the current high-stakes of technology development.

Key Points From This Episode:
- Joelle Pineau's professional background and how she ended up at Meta.
- The aspects of AI robotics that fascinate her the most.
- Why elegance is an important element in Joelle's machine learning systems.
- How asking the right question is the most vital part of research and how to get better at it.
- FRESCO: how Joelle chooses which projects to work on.
- The relationship between her work, AI, and the end users of her final products.
- What success looks like for her and her team at Meta.
- World models versus word models and her summation of information modalities.
- What Joelle thinks about responsibility in the current high-stakes of technology development.

Quotes:
“Perhaps, the most important thing in research is asking the right question.” — @jpineau1 [0:05:10]
“My role isn't to set the problems for [the research team], it's to set the conditions for them to be successful.” — @jpineau1 [0:07:29]
“If we're going to push for state-of-the-art on the scientific and engineering aspects, we must push for state-of-the-art in terms of social responsibility.” — @jpineau1 [0:20:26]

Links Mentioned in Today’s Episode:
Joelle Pineau on LinkedIn
Joelle Pineau on X
Meta
How AI Happens
Sama

Jan 24, 2024 • 25min
Alberta Machine Intelligence Institute Product Owner Mara Cairo
Key Points From This Episode:
- Amii’s machine learning project management tool: MLPL.
- Amii’s ultimate goal of building capacity and how it differs from an agency model.
- Asking the right questions to ascertain the appropriate use for AI.
- Instances where AI is not a relevant solution.
- Common challenges people face when adopting AI strategies.
- Mara’s perspective on the education necessary to excel in a career in machine learning.

Quotes:
“Amii is all about capacity building, so we’re not a traditional agent in that sense. We are trying to educate and inform industry on how to do this work, with Amii at first, but then without Amii at the end.” — Mara Cairo [0:06:20]
“We need to ask the right questions. That’s one of the first things we need to do, is to explore where the problems are.” — Mara Cairo [0:07:46]
“We certainly are comfortable turning certain business problems away if we don’t feel it’s an ethical match or if we truly feel it isn’t a problem that will benefit much from machine learning.” — Mara Cairo [0:11:52]

Links Mentioned in Today’s Episode:
Mara Cairo
Mara Cairo on LinkedIn
Alberta Machine Intelligence Institute
How AI Happens
Sama

Dec 8, 2023 • 27min
10 Years of FAIR at Meta with Sama Director of ML Jerome Pasquero
Discussion of Meta's Segment Anything Model (SAM) and its limitations in object segmentation. Exploring human augmentation with Ego-Exo4D technology and the potential of virtual reality. The significance of self-supervised learning and the excitement of unlimited resources.

Dec 5, 2023 • 32min
RoviSys Director of Industrial AI Bryan DeBois
Bryan DeBois, Director of Industrial AI at RoviSys, discusses the concept of industrial AI, deep reinforcement learning, machine teaching, and the future of generative AI. They explore the potential applications of AI in the industrial sector, the challenges of replicating human expertise with machines, and the importance of reliable systems. The conversation also delves into the current state of AI in the industrial landscape, the differences between monolithic deep learning and standard deep learning, and the significance of predictability in AI decision-making.

Nov 22, 2023 • 50min
ML Pulse Report with Voxel51 CSO Jason Corso and Sama VP Duncan Curtis
2023 ML Pulse Report
Joining us today are our panelists, Duncan Curtis, SVP of AI products and technology at Sama, and Jason Corso, a professor of robotics, electrical engineering, and computer science at the University of Michigan. Jason is also the chief science officer at Voxel51, an AI software company specializing in developer tools for machine learning. We use today’s conversation to discuss the findings of the latest Machine Learning (ML) Pulse report, published each year by our friends at Sama. This year’s report focused on the role of generative AI by surveying thousands of practitioners in this space. Its findings include feedback on how respondents are measuring their model’s effectiveness, how confident they feel that their models will survive production, and whether they believe generative AI is worth the hype. Tuning in you’ll hear our panelists’ thoughts on key questions in the report and its findings, along with their suggested solutions for some of the biggest challenges faced by professionals in the AI space today. We also get into a bunch of fascinating topics like the opportunities presented by synthetic data, the latent space in language processing approaches, the iterative nature of model development, and much more. Be sure to tune in for all the latest insights on the ML Pulse Report!

Key Points From This Episode:
- Introducing today’s panelists, Duncan Curtis and Jason Corso.
- An overview of what the Machine Learning (ML) Pulse report focuses on.
- Breaking down what the term generative means in AI.
- Our thoughts on key findings from the ML Pulse Report.
- What respondents, and our panelists, think of hype around generative AI.
- Unpacking one of the biggest advances in generative AI: accessibility.
- Insights on cloud versus local in an AI context.
- Generative AI use cases in the field of computer vision.
- The powerful opportunities presented by synthetic data.
- Why the role of human feedback in synthetic data is so important.
- Finding a middle ground between human language and machine understanding.
- Unpacking the notion of latent space in language processing approaches.
- How confident respondents feel that their models will survive production.
- The challenges of predicting how well a model will perform.
- An overview of the biggest challenges reported by respondents.
- Suggested solutions from panelists on key challenges from the report.
- How respondents are measuring the effectiveness of their models.
- What Duncan and Jason focus on to measure success.
- Career advice from our panelists on making meaningful contributions to this space.

Quotes:
“It's really hard to know how well your model is going to do.” — Jason Corso [0:27:10]
“With debugging and detecting errors in your data, I would definitely say look at some of the tooling that can enable you to move more quickly and understand your data better.” — Duncan Curtis [0:33:55]
“Work with experts – there's no replacement for good experience when it comes to actually boxing in a problem, especially in AI.” — Jason Corso [0:35:37]
“It's not just about how your model performs. It's how your model performs when it's interacting with the end user.” — Duncan Curtis [0:41:11]
“Remember, what we do in this field, and in all fields really, is by humans, for humans, and with humans. And I think if you miss that idea [then] you will not achieve – either your own potential, the group you're working with, or the tool.” — Jason Corso [0:48:20]

Links Mentioned in Today’s Episode:
Duncan Curtis on LinkedIn
Jason Corso
Jason Corso on LinkedIn
Voxel51
2023 ML Pulse Report
ChatGPT
Bard
DALL·E 3
How AI Happens
Sama

Nov 10, 2023 • 28min
AMD Senior Director of AI Software Ian Ferreira
Sama 2023 ML Pulse Report
ML Pulse Report: How AI Happens Live Webinar
AMD's Advancing AI Event
Our guest today is Ian Ferreira, who served as Chief Product Officer for Artificial Intelligence at Core Scientific until it was purchased by his current employer, Advanced Micro Devices (AMD), where he is now the Senior Director of AI Software. In our conversation, we talk about when in his career he shifted his focus to AI, his thoughts on the nobility of ChatGPT and applications beyond advertising for AI, and he touches on the scary aspect of Large Language Models (LLMs). We explore the possibility of replacing our standard conceptions of search, how he conceptualizes his role at AMD, and Ian shares his insights and thoughts on the “Arms Race for GPUs”. Be sure not to miss out on this episode as Ian shares valuable insights from his perspective as the Senior Director of AI Software at AMD.

Key Points From This Episode:
- An introduction to our guest on today’s episode: Ian Ferreira.
- The point in his career when AI became the main focus.
- His thoughts on the idea that ChatGPT is noble.
- The scary aspect of Large Language Models (LLMs).
- The possibilities of replacing our standard conceptions of search.
- Ian shares how he conceptualizes his role as Senior Director of AI Software at AMD, and the projects they’re currently working on.
- His thoughts on the “Arms Race” for GPUs.
- Ian underlines their partnership with research companies like the Allen Institute.
- Attempting to make a powerful GPU model easily available to the general public.
- He explains what he means by a sovereign model.
- Ian talks about AMD’s upcoming events and announcements.

Quotes:
“It’s just remarkable, the potential of AI — and now I’m fully in it and I think it’s a game-changer.” — @Ianfe [0:03:41]
“There are significantly more noble applications than advertising for AI and ChatGPT was great in that it put a face on AI for a lot of people who couldn’t really get their heads wrapped around [AI].” — @Ianfe [0:04:25]
“An LLM allows you to have a natural conversation with the search agent, so to speak.” — @Ianfe [0:09:21]
“All our stuff is open-sourced. AMD has a strong ethos, both in open-source and in partnerships. We don’t compete with our customers, and so being open allows you to go and look at all our code and make sure that whatever you are going to deploy is something you’ve looked at.” — @Ianfe [0:12:15]

Links Mentioned in Today’s Episode:
Advancing AI Event
Ian Ferreira on LinkedIn
Ian Ferreira on X
AMD
AMD Software Stack
Hugging Face
Allen Institute
OpenAI
How AI Happens
Sama

Oct 31, 2023 • 29min
GM for Amazon CodeWhisperer Doug Seven
Generative AI is becoming more common in our lives as the technology grows and evolves. There are now AI companions to help other AI models execute their tasks more efficiently, and Amazon CodeWhisperer (ACW) is among the best in the game. We are joined today by the General Manager of Amazon CodeWhisperer and Director of Software Development at Amazon Web Services (AWS), Doug Seven. We discuss how Doug and his team are able to remain agile in a huge organization like Amazon before getting a crash course on the two-pizza-team philosophy and everything you need to know about ACW and how it works. Then, we dive into the characteristics that make up a generative AI model, why Amazon felt it necessary to create its own AI companion, why AI is not here to take our jobs, how Doug and his team ensure that ACW is safe and responsible, and how generative AI will become common in most households much sooner than we may think.

Key Points From This Episode:
- Introducing the Director of Software Development and General Manager of Amazon CodeWhisperer at Amazon Web Services, Doug Seven.
- A day in the life of Doug in his role at Amazon.
- What his team currently looks like.
- Whether he and his team retain their agility in a massive organization like Amazon.
- A crash course on the two-pizza-team philosophy.
- How Doug ended up at Amazon Web Services (AWS) and leading ACW.
- What ACW is, how it works, and why you need it for you and your business.
- Assessing if generative AI models need to produce new code to be considered generative.
- Why Amazon felt it pertinent to create its own AI companion in ACW.
- How to use ACW to its full potential.
- The way recommendations change and improve once ACW has access to your code base.
- Examples that reiterate how AI is not here to take your job but to do the jobs you hate.
- Guardrails that ACW is putting up to ensure that it remains safe, secure, and responsible.
- How generative AI will become more accessible to the masses as it evolves.

Oct 27, 2023 • 31min
Bell Senior Data Scientist Dalia Shanshal
In today’s episode, we are joined by Dalia Shanshal, Senior Data Scientist at Bell, Canada's largest communications company that offers advanced broadband wireless, Internet, TV, media, and business communications services. With over five years of experience working on hands-on projects, Dalia has a diverse background in data science and AI. We start our conversation by talking about the recent GeekFest Conference, what it is about, and key takeaways from the event. We then delve into her professional career journey and how a fascinating article inspired her to become a data scientist. During our conversation, Dalia reflects on the evolving nature of data science, discussing the skills and qualities that are now more crucial than ever for excelling in the field. We also explore why creativity is essential for problem-solving, the value of starting simple, and how to stand out as a data scientist before she explains her unique root cause analysis framework.

Key Points From This Episode:
- Highlights of the recent Bell GeekFest Conference.
- AI-related topics focused on at the event.
- Why Bell’s GeekFest is only an internal conference.
- Details about Bell and Dalia’s role at the company.
- Her background and professional career journey.
- How the role of a data scientist has changed over time.
- The importance of creativity in problem-solving.
- Overview of why quality data is fundamental.
- Qualities of a good data scientist.
- The research side of data science.
- Dalia reveals her root cause analysis framework.
- Exciting projects she is currently working on.

Tweetables:
“What I do is to try leverage AI and machine learning to speed up and fastrack investigative processes.” — Dalia Shanshal [0:06:52]
“Data scientists today are key in business decisions. We always need business decisions based on facts and data, so the ability to mine that data is super important.” — Dalia Shanshal [0:08:35]
“The most important skill set [of a data scientist] is to be able to [develop] creative approaches to problem-solving. That is why we are called scientists.” — Dalia Shanshal [0:11:24]
“I think it is very important for data scientists to keep up to date with the science. Whenever I am [faced] with a problem, I start by researching what is out there.” — Dalia Shanshal [0:22:18]
“One of the things that is really important to me is making sure that whatever [data scientists] are doing has an impact.” — Dalia Shanshal [0:33:50]

Links Mentioned in Today’s Episode:
Dalia Shanshal
Dalia Shanshal on LinkedIn
Dalia Shanshal on GitHub
Dalia Shanshal Email
Bell
GeekFest 2023 | Bell
Canadian Conference on Artificial Intelligence (CANAI)
‘Towards an Automated Framework of Root Cause Analysis in the Canadian Telecom Industry’
Ohm Dome Project
How AI Happens
Sama