
How AI Happens

Latest episodes

Aug 17, 2023 • 29min

Veritone Head of Product & Engineering Chris Doe

Creating AI workflows can be a challenging process. While purchasing these types of technologies may be straightforward, implementing them across multiple teams is often anything but. That's where a company like Veritone can offer unparalleled support. With over 400 AI engines on their platform, they've created a unique operating system that helps companies orchestrate AI workflows with ease and efficacy. Chris discusses the differences between legacy and generative AI, how LLMs have transformed chatbots, and what you can do to identify potential AI use cases within an organization. AI innovations are taking place at a remarkable pace and companies are feeling the pressure to innovate or be left behind, so tune in to learn more about AI applications in business and how you can revolutionize your workflow!

Key Points From This Episode:
- An introduction to Chris Doe, Product Management Leader at Veritone.
- How Veritone is helping clients orchestrate their AI workflows.
- The four verticals Chris oversees: media, entertainment, sports, and advertising.
- Building solutions that infuse AI from beginning to end.
- An overview of the type of AI that Veritone is infusing.
- How they are helping their clients navigate the expansive landscape of cognitive engines.
- Fine-tuning generative AI to be use-case-specific for their clients.
- Why now is the time to be testing and defining proof of concept for generative AI.
- How LLMs have transformed chatbots to be significantly more sophisticated.
- Creating bespoke chatbots for clients that can navigate complex enterprise applications.
- The most common challenges clients face when it comes to integrating AI applications.
- Chris's advice on taking stock of an organization and figuring out where to apply AI.
- Tips on how to identify potential AI use cases within an organization.

Quotes:
- "Anybody who's writing text can leverage generative AI models to make their output better." — @chris_doe [0:05:32]
- "With large language models, they've basically given these chatbots a whole new life." — @chris_doe [0:12:38]
- "I can foresee a scenario where most enterprise applications will have an LLM-powered chatbot in their UI." — @chris_doe [0:13:31]
- "It's easy to buy technology, it's hard to get it adopted across multiple teams that are all moving in different directions and speeds." — @chris_doe [0:21:16]
- "People can start new companies and innovate very quickly these days. And the same has to be true for large companies. They can't just sit on their existing product set. They always have to be innovating." — @chris_doe [0:23:05]
- "We just have to identify the most problematic part of that workflow and then solve it." — @chris_doe [0:26:20]

Links Mentioned in Today's Episode:
- Chris Doe on LinkedIn
- Chris Doe on X
- Veritone
- How AI Happens
- Sama
Aug 11, 2023 • 34min

Microsoft Technical Strategist Valeria Sadovykh, PhD

AI is an incredible tool that has allowed us to evolve into more efficient human beings. But a lack of ethical and responsible design in AI can lead to a level of detachment from real people and authenticity. A wonderful technology strategist at Microsoft, Valeria Sadovykh, joins us today on How AI Happens. Valeria discusses why she is concerned about AI tools that assist users in decision-making, the responsibility she feels these companies hold, and the importance of innovation. We delve into common challenges these companies face in people, processes, and technology before exploring the effects of the democratization of AI. Finally, our guest shares her passion for emotional AI and tells us why that keeps her in the space. To hear it all, tune in now!

Key Points From This Episode:
- An introduction to today's guest, Valeria Sadovykh.
- Valeria tells us about her studies at the University of Auckland and her Ph.D.
- The problems with using the internet to assist in decision making.
- How ethical and responsible AI frames Valeria's career.
- What she is doing to encourage AI leaders to prioritize responsible design.
- The dangers of a lack of authenticity, creativity, and emotion in AI.
- Whether we need human interaction or not, and if we want to preserve it.
- What responsibility companies developing this technology have, according to Valeria.
- She tells us about her job at Microsoft and what large organizations are doing to be ethical.
- What kinds of AI organizations need to be most conscious of ethics and responsible design.
- Other common challenges companies face when they plug in other technology.
- How those challenges show up in people, processes, and technology when deploying AI.
- Why Valeria expects some costs to decrease as AI technology democratizes over time.
- The importance of innovating and being prepared to (potentially) fail.
- Why the future of emotional AI and the ability to be authentic fascinates Valeria.

Tweetables:
- "We have no opportunity to learn something new outside of our predetermined environment." — @ValeriaSadovykh [0:07:07]
- "[Ethics] as a concept is very difficult to understand because what is ethical for me might not necessarily be ethical for you and vice versa." — @ValeriaSadovykh [0:11:38]
- "Ethics should not come [in] place of innovation." — @ValeriaSadovykh [0:20:13]
- "Not following up, not investing, not trying, [and] not failing is also preventing you from success." — @ValeriaSadovykh [0:29:52]

Links Mentioned in Today's Episode:
- Valeria Sadovykh on LinkedIn
- Valeria Sadovykh on Instagram
- Valeria Sadovykh on Twitter
- How AI Happens
- Sama
Aug 9, 2023 • 26min

Gradient Ventures Founder Anna Patterson

Key Points From This Episode:
- She shares her professional journey that eventually led to the founding of Gradient Ventures.
- How Anna contrasts an AI winter with the standard hype cycles that exist.
- Her thoughts on how the web and mobile sectors were under-hyped.
- Who decides whether something falls out of favor, according to Anna.
- How Anna navigates hype cycles.
- Her process for evaluating early-stage AI companies.
- How to assess whether someone is a tourist or truly committed to something.
- Approaching problems and discerning whether AI is the right answer.
- Her thoughts on the best applications for AI or ML technology.
- Anna shares why she is excited about large language models (LLMs).
- Thoughts on LLMs and whether we should, or even can, approach AGI.
- A discussion: do we limit machines when we teach them to speak the way we speak?
- Quality AI and navigating fairness: the concept of the human in the loop.
- Boring but essential data tasks: whose job is that?
- How she feels about sensationalism.
- What gets her fired up when it is time to support new companies.
- Advice to those forging careers in the AI and ML space.

Tweetables:
- "When that hype cycle happens, where it is overhyped and falls out of favor, then generally that is what is called a winter." — @AnnapPatterson [0:03:28]
- "No matter how hyped you think AI is now, I think we are underestimating its change." — @AnnapPatterson [0:04:06]
- "When there is a lot of hype and then not as many breakthroughs or not as many applications that people think are transformational, then it starts to go through a winter." — @AnnapPatterson [0:04:47]

Links Mentioned in Today's Episode:
- Anna Patterson on LinkedIn
- 'Eight critical approaches to LLMs'
- 'The next programming language is English'
- 'The Advice Taker'
- Gradient
- How AI Happens
- Sama
Jul 28, 2023 • 35min

Wayfair Director of Machine Learning Tulia Plumettaz

Wayfair uses AI and machine learning (ML) technology to interpret what its customers want, connect them with products nearby, and ensure that the products they see online look and feel the same as the ones that ultimately arrive in their homes. With a background in engineering and a passion for all things STEM, Wayfair's Director of Machine Learning, Tulia Plumettaz, is an innate problem-solver. In this episode, she offers some insight into Wayfair's ML-driven decision-making processes, how they implement AI and ML for preventative problem-solving and predictive maintenance, and how they use data enrichment and customization to help customers navigate the inspirational (and sometimes overwhelming) world of home decor. We also discuss the culture of experimentation at Wayfair and Tulia's advice for those looking to build a career in machine learning.

Key Points From This Episode:
- A look at Tulia's engineering background and how she ended up in this role at Wayfair.
- Defining operations research and examples of its real-life applications.
- What it means for something to be strategy-proof.
- Different ways that AI and ML are being integrated at Wayfair.
- The challenge of unstructured data and how Wayfair takes the onus off suppliers.
- Wayfair's North Star: detecting anomalies before they're exposed to customers.
- Preventative problem-solving and how Wayfair trains ML models to "see around corners."
- Examples of nuanced outlier detection and whether or not ML applications would be suitable.
- Insight into Wayfair's bespoke search tool and how it interprets customers' needs.
- The exploit-and-explore model Wayfair uses to measure success and improve accordingly.
- Tulia's advice for those forging a career in machine learning: go back to first principles!

Tweetables:
- "[Operations research is] a very broad field at the intersection between mathematics, computer science, and economics that [applies these toolkits] to solve real-life applications." — Tulia Plumettaz [0:03:42]
- "All the decision making, from which channel should I bring you in [with] to how do I bring you back if you're taking your sweet time to make a decision to what we show you when you [visit our site], it's all [machine learning]-driven." — Tulia Plumettaz [0:09:58]
- "We want to be in a place [where], as early as possible, before problems are even exposed to our customers, we're able to detect them." — Tulia Plumettaz [0:18:26]
- "We have the challenge of making you buy something that you would traditionally feel, sit [on], and touch virtually, from the comfort of your sofa. How do we do that? [Through the] enrichment of information." — Tulia Plumettaz [0:29:05]
- "We knew that making it easier to navigate this very inspirational space was going to require customization." — Tulia Plumettaz [0:29:39]
- "At its core, it's an exploit-and-explore process with a lot of hypothesis testing. Testing is at the core of [Wayfair] being able to say: this new version is better than [the previous] version." — Tulia Plumettaz [0:31:53]

Links Mentioned in Today's Episode:
- Tulia Plumettaz on LinkedIn
- Wayfair
- How AI Happens
- Sama
Jul 19, 2023 • 26min

FreeWheel's VP of Data Science Bob Bress

Bob highlights the importance of building interdepartmental relationships and growing a talented team of problem solvers, as well as the key role of continuous education. He also offers some insight into the technical and not-so-technical skills of a "data science champion," tips for building adaptable data infrastructures, and the best career advice he has ever received, plus so much more. For an insider's look at the data science operation at FreeWheel and valuable advice from an analytics leader with more than two decades of experience, be sure to tune in today!

Key Points From This Episode:
- A high-level overview of FreeWheel, Bob's role there, and his career trajectory thus far.
- Important intersections between data science and the organization at large.
- Three indicators that FreeWheel is a data-driven company.
- Why continuous education is a key component for agile data science teams.
- The interplay between data science and the development of AI technology.
- Technical (and other) skills that Bob looks for when recruiting new talent to his team.
- Bob's perspective on the value of interdepartmental collaboration.
- Insight into what an adaptable data infrastructure looks like.
- The importance of asking yourself, "What more can we do?"

Tweetables:
- "As a data science team, it's not enough to be able to solve quantitative problems. You have to establish connections to the company in a way that uncovers those problems to begin with." — @Bob_Bress [0:06:42]
- "The more we can do to educate folks on the type of work that the [data science] team does, the better the position we are in to tackle more interesting problems and innovate around new ideas and concepts." — @Bob_Bress [0:09:49]
- "There are so many interactions and dependencies across any project of sufficient complexity that it's only through [collaboration] across teams that you're going to be able to hone in on the right answer." — @Bob_Bress [0:17:34]
- "There is always more you can do to enhance the work you're doing, other questions you can ask, other ways you can go beyond just checking a box." — @Bob_Bress [0:23:31]

Links Mentioned in Today's Episode:
- Bob Bress on LinkedIn
- Bob Bress on Twitter
- FreeWheel
- How AI Happens
- Sama
Jul 12, 2023 • 28min

Declarative ML with Ludwig Creator & Predibase CEO & Co-Founder Piero Molino

Low-code platforms provide a powerful and efficient way to develop applications and drive digital transformation, and they are becoming popular tools for organizations. In today's episode, we are joined by Piero Molino, CEO and Co-Founder of Predibase, a company revolutionizing the field of machine learning by pioneering a low-code declarative approach. Predibase empowers engineers and data scientists to effortlessly construct, enhance, and implement cutting-edge models, ranging from linear regressions to large language models, with just a few lines of code. Piero is intrigued by the convergence of diverse cultural interests and finds great fascination in exploring the intricate ties between knowledge, language, and learning. His approach involves seeking unconventional solutions to problems and embracing a multidisciplinary mindset that allows him to acquire novel and varied knowledge while gaining fresh experiences. In our conversation, we talk about his professional career journey, developing Ludwig, and how this eventually grew into Predibase.

Key Points From This Episode:
- Background about Piero's professional experience and skill sets.
- What his responsibilities were in his previous role at Uber.
- Hear about his research at Stanford University.
- Details about the motivation for Predibase: Ludwig AI.
- Examples of the different Ludwig models and applications.
- Challenges of software development.
- How the community further developed his Ludwig machine learning tool.
- The benefits of community involvement for developers.
- Hear how his Ludwig project developed into Predibase.
- He shares the inspiration behind the name Ludwig.
- Why Predibase can be considered a low-code platform.
- What the Predibase platform offers users and organizations.
- Ethical considerations of democratizing data science tools.
- The importance of a multidisciplinary approach to developing AI tools.
- Advice for upcoming developers.

Tweetables:
- "One thing that I am proud of is the fact that the architecture is very extensible and really easy to plug and play new data types or new models." — @w4nderlus7 [0:14:02]
- "We are doing a bunch of things at Predibase that build on top of Ludwig and make it available and easy to use for organizations in the cloud." — @w4nderlus7 [0:19:23]
- "I believe that in the teams that actually put machine learning into production, there should be a combination of different skill sets." — @w4nderlus7 [0:23:04]
- "What made it possible for me to do the things that I have done is constant curiosity." — @w4nderlus7 [0:26:06]

Links Mentioned in Today's Episode:
- Piero Molino on LinkedIn
- Piero Molino on Twitter
- Predibase
- Ludwig
- Max-Planck-Institute
- Loopr AI
- Wittgenstein's Mistress
- How AI Happens
- Sama
Jun 30, 2023 • 35min

dRISK CEO Chess Stetson & COO Rav Babbra

dRISK uses a unique approach to increasing AV safety: collecting real-life scenarios and data from accidents, insurance reports, and more to train autonomous vehicles on extreme edge cases. With their advanced simulation tool, they can accurately recreate and test these scenarios, allowing AV developers to improve the performance and safety of their vehicles. Join us as Chess and Rav delve into the exciting world of AVs and the challenges they face in creating safer and more efficient transportation systems.

Key Points From This Episode:
- Introducing dRISK Founder and CEO, Chess Stetson, and COO, Rav Babbra.
- dRISK's mission to help autonomous vehicles become better drivers than humans.
- The UK government's interest in autonomous vehicles to solve transportation problems.
- Rav's career background; how the CAVSim competition put dRISK on his radar.
- How dRISK's software presents real-life scenarios and extreme edge cases to test AVs.
- Chess defines extreme edge cases in the AV realm and explains where AVs typically go wrong.
- How the company uses natural language processing and AI-based techniques to improve simulation accuracy for AV testing.
- The metrics used to ensure the accuracy of the simulations.
- What makes AI different from humans in an AV context.
- The benchmark for the capability of AVs; the tolerance for human driver error versus AV error.
- Why third-party testing is a necessity for AI.
- dRISK's assessment process for autonomous vehicles.
- The delicate balance between innovation and regulation.
- Examples of AV edge cases.

Tweetables:
- "At the time, no autonomous vehicles could ever actually drive on the UK's roads. And that's where Chess and the team at dRISK have done such [a] great piece of work." — Rav Babbra [0:07:25]
- "If you've got an unprotected cross-traffic turn, that's where a lot of things traditionally go wrong with AVs." — Chess Stetson [0:08:45]
- "We can, in an automated way, map out metrics for what might or might not constitute a good test and cut out things that would be something like a hallucination." — Chess Stetson [0:13:59]
- "The thing that makes AI different than humans is that if you have a good driver's test for an AI, it's also a good training environment for an AI. That's different [from] humans because humans have common sense." — Chess Stetson [0:15:10]
- "If you can really rigorously test [AI] on its ability to have common sense, you can also train it to have a certain amount of common sense." — Chess Stetson [0:15:51]
- "The difference between an AI and a human is that if you had a good test, it's equivalent to a good training environment." — Chess Stetson [0:16:29]
- "I personally think it's not unrealistic to imagine [AVs] getting so good that there's never a death on the road at all." — Chess Stetson [0:18:50]
- "One of the reasons that we're in the UK is precisely because the UK is going to have no tolerance for autonomous vehicle collisions." — Chess Stetson [0:20:08]
- "Now, there's never a cow in the highway here in the UK, but of course, things do fall off lorries. So if we can train against a cow sitting on the highway, then the next time a grand piano falls off the back of a truck, we've got some training data at least that helps it avoid that." — Rav Babbra [0:35:12]
- "If you target the worst case scenario, everything underneath, you've been able to capture and deal with." — Rav Babbra [0:36:08]

Links Mentioned in Today's Episode:
- Chess Stetson
- Chess Stetson on LinkedIn
- Rav Babbra on LinkedIn
- dRISK
- How AI Happens
- Sama
Jun 15, 2023 • 27min

Stantec GenerationAV Founder Corey Clothier

In this episode, we learn about the common challenges companies face when it comes to developing and deploying their AV and how Stantec uses military and aviation best practices to remove human error and ensure safety and reliability in AV operations. Corey explains the importance of collecting edge cases and shares his take on why the autonomous mobility industry is so meaningful.

Key Points From This Episode:
- Introducing Autonomous Mobility Strategist and Stantec GenerationAV Founder Corey Clothier.
- Corey breaks down his typical week.
- Applications for autonomously mobile wheelchairs.
- Corey's experience working in robotics for the Department of Defense.
- The state of autonomy back in 2009 and 2010.
- Corey's definition of commercialization.
- Why there's less forgiveness for downtime with autonomous vehicles than human-operated vehicles.
- How people's attitudes around autonomy and robotics differ in different parts of the world.
- The sensationalism around autonomous vehicle "crashes."
- Stantec's approach to measuring and assessing the safety and risk of autonomous vehicles.
- Why it's so crucial to collect edge cases and how solving for them is applied downstream.
- The common challenges companies face when it comes to deploying and developing their AV.
- How Stantec uses military and aviation best practices to remove human error in AV operations.
- The advantages of and opportunities behind AV.
- Advice for those hoping to forge an impactful career in autonomous vehicles.

Tweetables:
- "For me, [commercialization] is a safe and reliable service that actually can perform the job that it's supposed to." — @coreyclothier [0:07:04]
- "Most of the autonomous vehicles that I've been working with, even since the beginning, most of them are pretty safe." — @coreyclothier [0:08:01]
- "When you start to talk to people from around the world, they absolutely have different attitudes related to autonomy and robotics." — @coreyclothier [0:09:20]
- "What's exciting, though, about dRISK [is that] it gives us a quantifiable risk measure, something that we can look at as a baseline and then something we can see as we make improvements and do mitigation strategies." — @coreyclothier [0:17:18]
- "The common challenges really are being able to handle all the edge cases in the operating environment that they're going to deploy." — @coreyclothier [0:20:41]

Links Mentioned in Today's Episode:
- Corey Clothier on LinkedIn
- Corey Clothier on Twitter
- Stantec
- dRISK
- How AI Happens
- Sama
May 11, 2023 • 31min

Credit Karma VP Engineering Vishnu Ram

Vishnu provides valuable advice for data scientists who want to help create high-quality data that can be used effectively to impact business outcomes. Tune in to gain insights from Vishnu's extensive experience in engineering leadership and data technologies.

Key Points From This Episode:
- An introduction to Vishnu Ram, his background, and how he came to Credit Karma.
- His prior exposure to AI in the form of fuzzy logic and neural networks.
- What Credit Karma needed to do before the injection of AI into its data functions.
- The journey of building Credit Karma into the data science operation that it is.
- Challenges of building the models in time so the data isn't outdated by the time it can be used.
- The nature of technical debt.
- How compensating for technical debt with people or processes is different from normal business growth.
- The current data culture of Credit Karma.
- Some pros and cons of a multi-team approach when introducing new platforms or frameworks.
- The process of adopting TensorFlow and injecting it in a meaningful way.
- How they mapped the need for this new model to a business use case and the internal education that was needed to make this change.
- Insight into the shift from being an individual contributor to a management position with organization-wide challenges.
- Advice to data scientists wanting to help create a data culture that results in clean, usable, high-quality data.

Tweetables:
- "One of the things that we always care about [at Credit Karma] is making sure that when you are recommending any financial products in front of the users, we provide them with a sense of certainty." — Vishnu Ram [0:05:59]
- "One of the big things that we had to do, pretty much right off the bat, was make sure that our data scientists were able to get access to the data at scale — and be able to build the models in time so that the model maps to the future and performs well for the future." — Vishnu Ram [0:08:00]
- "Whenever we want to introduce new platforms or frameworks, both the teams that own that framework as well as the teams that are going to use that framework or platform would work together to build it up from scratch." — Vishnu Ram [0:15:11]
- "If your consumers have done their own research, it's a no-brainer to start including them because they're going to help you see around the corner and make sure you're making the right decisions at the right time." — Vishnu Ram [0:16:43]

Links Mentioned in Today's Episode:
- Vishnu Ram
- Credit Karma
- TensorFlow
- TFX: A TensorFlow-Based Production-Scale Machine Learning Platform [19:15]
- How AI Happens
- Sama
May 4, 2023 • 36min

Vector Search with Algolia CTO Sean Mullaney

Algolia is an AI-powered search and discovery platform that helps businesses deliver fast, personalized search experiences. In our conversation, Sean shares what ignited his passion for AI and how Algolia is using AI to deliver lightning-fast custom search results to each user. He explains how Algolia's AI algorithms learn from user behavior and talks about the challenges and opportunities of implementing AI in search and discovery processes. We discuss improving the user experience through AI, why technologies like ChatGPT are disrupting the market, and how Algolia is providing innovative solutions. Learn about "hashing," the difference between keyword and vector searches, the company's approach to ranking, and much more.

Key Points From This Episode:
- Learn about Sean's professional journey and previous experience working with AI and e-commerce.
- Discover why Sean is so passionate about the technology industry and how he was able to see gaps within the e-commerce user experience.
- Gain insights into the challenges currently facing search engines and why it's not just about how you ask the search engine but also about how it responds.
- Get an overview of how Algolia's search algorithm differs from the rest and how it trains results on context to deliver lightning-fast, relevant results.
- Learn about the problems with vectors and how Algolia is using AI to revolutionize the search and discovery process.
- Sean explains Algolia's approach to ranking search results and shares details about Algolia's new decompression algorithm.
- Discover how Algolia's breakthroughs were inspired by different fields like biology and the problems facing search engine optimization for the e-commerce sector.
- Find out when users can expect to see Algolia's approach to search outside of the e-commerce experience.

Tweetables:
- "Well, the great thing is that every 10 years the entire technology industry changes, so there is never a shortage of new technology to learn and new things to build." — Sean Mullaney [0:05:08]
- "It is not just the way that you ask the search engine the question, it is also the way the search engine responds regarding search optimization." — Sean Mullaney [0:08:04]

Links Mentioned in Today's Episode:
- Sean Mullaney on LinkedIn
- Algolia
- ChatGPT
- How AI Happens
- Sama
