
How AI Happens

Latest episodes

Jul 12, 2023 • 28min

Declarative ML with Ludwig Creator & Predibase CEO & Co-Founder Piero Molino

Low-code platforms provide a powerful and efficient way to develop applications and drive digital transformation, and they are becoming popular tools for organizations. In today’s episode, we are joined by Piero Molino, CEO and Co-Founder of Predibase, a company revolutionizing the field of machine learning by pioneering a low-code declarative approach. Predibase empowers engineers and data scientists to effortlessly build, improve, and deploy state-of-the-art models, from linear regressions to large language models, with just a few lines of code. Piero is intrigued by the convergence of diverse cultural interests and finds great fascination in exploring the intricate ties between knowledge, language, and learning. His approach involves seeking unconventional solutions to problems and embracing a multidisciplinary mindset that allows him to acquire new and varied knowledge while gaining fresh experiences. In our conversation, we talk about his professional journey, developing Ludwig, and how that work eventually grew into Predibase.

Key Points From This Episode:
- Background on Piero’s professional experience and skill sets.
- What his responsibilities were in his previous role at Uber.
- Hear about his research at Stanford University.
- Details about the motivation for Predibase: Ludwig AI.
- Examples of the different Ludwig models and applications.
- Challenges of software development.
- How the community further developed his Ludwig machine learning tool.
- The benefits of community involvement for developers.
- Hear how his Ludwig project developed into Predibase.
- He shares the inspiration behind the name Ludwig.
- Why Predibase can be considered a low-code platform.
- What the Predibase platform offers users and organizations.
- Ethical considerations of democratizing data science tools.
- The importance of a multidisciplinary approach to developing AI tools.
- Advice for upcoming developers.

Tweetables:
“One thing that I am proud of is the fact that the architecture is very extensible and really easy to plug and play new data types or new models.” — @w4nderlus7 [0:14:02]
“We are doing a bunch of things at Predibase that build on top of Ludwig and make it available and easy to use for organizations in the cloud.” — @w4nderlus7 [0:19:23]
“I believe that in the teams that actually put machine learning into production, there should be a combination of different skill sets.” — @w4nderlus7 [0:23:04]
“What made it possible for me to do the things that I have done is constant curiosity.” — @w4nderlus7 [0:26:06]

Links Mentioned in Today’s Episode:
Piero Molino on LinkedIn
Piero Molino on Twitter
Predibase
Ludwig
Max-Planck-Institute
Loopr AI
Wittgenstein's Mistress
How AI Happens
Sama
Jun 30, 2023 • 35min

dRISK CEO Chess Stetson & COO Rav Babbra

dRISK uses a unique approach to increasing AV safety: collecting real-life scenarios and data from accidents, insurance reports, and more to train autonomous vehicles on extreme edge cases. With their advanced simulation tool, they can accurately recreate and test these scenarios, allowing AV developers to improve the performance and safety of their vehicles. Join us as Chess and Rav delve into the exciting world of AVs and the challenges they face in creating safer and more efficient transportation systems.

Key Points From This Episode:
- Introducing dRISK Founder and CEO, Chess Stetson, and COO, Rav Babbra.
- dRISK’s mission to help autonomous vehicles become better drivers than humans.
- The UK government’s interest in autonomous vehicles to solve transportation problems.
- Rav’s career background; how the CAVSim competition put dRISK on his radar.
- How dRISK’s software presents real-life scenarios and extreme edge cases to test AVs.
- Chess defines extreme edge cases in the AV realm and explains where AVs typically go wrong.
- How the company uses natural language processing and AI-based techniques to improve simulation accuracy for AV testing.
- The metrics used to ensure the accuracy of the simulations.
- What makes AI different from humans in an AV context.
- The benchmark for the capability of AVs; the tolerance for human driver error versus AV error.
- Why third-party testing is a necessity for AI.
- dRISK’s assessment process for autonomous vehicles.
- The delicate balance between innovation and regulation.
- Examples of AV edge cases.

Tweetables:
“At the time, no autonomous vehicles could ever actually drive on the UK's roads. And that's where Chess and the team at dRISK have done such [a] great piece of work.” — Rav Babbra [0:07:25]
“If you've got an unprotected cross-traffic turn, that's where a lot of things traditionally go wrong with AVs.” — Chess Stetson [0:08:45]
“We can, in an automated way, map out metrics for what might or might not constitute a good test and cut out things that would be something like a hallucination.” — Chess Stetson [0:13:59]
“The thing that makes AI different than humans is that if you have a good driver's test for an AI, it's also a good training environment for an AI. That's different [from] humans because humans have common sense.” — Chess Stetson [0:15:10]
“If you can really rigorously test [AI] on its ability to have common sense, you can also train it to have a certain amount of common sense.” — Chess Stetson [0:15:51]
“The difference between an AI and a human is that if you had a good test, it's equivalent to a good training environment.” — Chess Stetson [0:16:29]
“I personally think it's not unrealistic to imagine [AVs] getting so good that there's never a death on the road at all.” — Chess Stetson [0:18:50]
“One of the reasons that we're in the UK is precisely because the UK is going to have no tolerance for autonomous vehicle collisions.” — Chess Stetson [0:20:08]
“Now, there's never a cow in the highway here in the UK, but of course, things do fall off lorries. So if we can train against a cow sitting on the highway, then the next time a grand piano falls off the back of a truck, we've got some training data at least that helps it avoid that.” — Rav Babbra [0:35:12]
“If you target the worst case scenario, everything underneath, you've been able to capture and deal with.” — Rav Babbra [0:36:08]

Links Mentioned in Today’s Episode:
Chess Stetson
Chess Stetson on LinkedIn
Rav Babbra on LinkedIn
dRISK
How AI Happens
Sama
Jun 15, 2023 • 27min

Stantec GenerationAV Founder Corey Clothier

In this episode, we learn about the common challenges companies face when it comes to developing and deploying their AVs and how Stantec uses military and aviation best practices to remove human error and ensure safety and reliability in AV operations. Corey explains the importance of collecting edge cases and shares his take on why the autonomous mobility industry is so meaningful.

Key Points From This Episode:
- Introducing Autonomous Mobility Strategist and Stantec GenerationAV Founder Corey Clothier.
- Corey breaks down his typical week.
- Applications for autonomously mobile wheelchairs.
- Corey’s experience working in robotics for the Department of Defense.
- The state of autonomy back in 2009 and 2010.
- Corey’s definition of commercialization.
- Why there’s less forgiveness for downtime with autonomous vehicles than human-operated vehicles.
- How people’s attitudes around autonomy and robotics differ in different parts of the world.
- The sensationalism around autonomous vehicle “crashes.”
- Stantec’s approach to measuring and assessing the safety and risk of autonomous vehicles.
- Why it’s so crucial to collect edge cases and how solving for them is applied downstream.
- The common challenges companies face when it comes to deploying and developing their AVs.
- How Stantec uses military and aviation best practices to remove human error in AV operations.
- The advantages of and opportunities behind AVs.
- Advice for those hoping to forge an impactful career in autonomous vehicles.

Tweetables:
“For me, [commercialization] is a safe and reliable service that actually can perform the job that it's supposed to.” — @coreyclothier [0:07:04]
“Most of the autonomous vehicles that I've been working with, even since the beginning, most of them are pretty safe.” — @coreyclothier [0:08:01]
“When you start to talk to people from around the world, they absolutely have different attitudes related to autonomy and robotics.” — @coreyclothier [0:09:20]
“What's exciting though about dRISK [is] it gives us a quantifiable risk measure, something that we can look at as a baseline and then something we can see as we make improvements and do mitigation strategies.” — @coreyclothier [0:17:18]
“The common challenges really are being able to handle all the edge cases in the operating environment that they're going to deploy.” — @coreyclothier [0:20:41]

Links Mentioned in Today’s Episode:
Corey Clothier on LinkedIn
Corey Clothier on Twitter
Stantec
dRISK
How AI Happens
Sama
May 11, 2023 • 31min

Credit Karma VP Engineering Vishnu Ram

Vishnu provides valuable advice for data scientists who want to help create high-quality data that can be used effectively to impact business outcomes. Tune in to gain insights from Vishnu's extensive experience in engineering leadership and data technologies.

Key Points From This Episode:
- An introduction to Vishnu Ram, his background, and how he came to Credit Karma.
- His prior exposure to AI in the form of fuzzy logic and neural networks.
- What Credit Karma needed to do before the injection of AI into its data functions.
- The journey of building Credit Karma into the data science operation that it is.
- Challenges of building the models in time so the data isn’t outdated by the time it can be used.
- The nature of technical debt.
- How compensating for technical debt with people or processes is different from normal business growth.
- The current data culture of Credit Karma.
- Some pros and cons of a multi-team approach when introducing new platforms or frameworks.
- The process of adopting TensorFlow and injecting it in a meaningful way.
- How they mapped the need for this new model to a business use case and the internal education that was needed to make this change.
- Insight into the shift from being an individual contributor to a management position with organization-wide challenges.
- Advice to data scientists wanting to help create a data culture that results in clean, usable, high-quality data.

Tweetables:
“One of the things that we always care about [at Credit Karma] is making sure that when you are recommending any financial products in front of the users, we provide them with a sense of certainty.” — Vishnu Ram [0:05:59]
“One of the big things that we had to do, pretty much right off the bat, was make sure that our data scientists were able to get access to the data at scale — and be able to build the models in time so that the model maps to the future and performs well for the future.” — Vishnu Ram [0:08:00]
“Whenever we want to introduce new platforms or frameworks, both the teams that own that framework as well as the teams that are going to use that framework or platform would work together to build it up from scratch.” — Vishnu Ram [0:15:11]
“If your consumers have done their own research, it’s a no-brainer to start including them because they’re going to help you see around the corner and make sure you're making the right decisions at the right time.” — Vishnu Ram [0:16:43]

Links Mentioned in Today’s Episode:
Vishnu Ram
Credit Karma
TensorFlow
TFX: A TensorFlow-Based Production-Scale Machine Learning Platform [19:15]
How AI Happens
Sama
May 4, 2023 • 36min

Vector Search with Algolia CTO Sean Mullaney

Algolia is an AI-powered search and discovery platform that helps businesses deliver fast, personalized search experiences. In our conversation, Sean shares what ignited his passion for AI and how Algolia is using AI to deliver lightning-fast custom search results to each user. He explains how Algolia's AI algorithms learn from user behavior and talks about the challenges and opportunities of implementing AI in search and discovery processes. We discuss improving the user experience through AI, why technologies like ChatGPT are disrupting the market, and how Algolia is providing innovative solutions. Learn about “hashing,” the difference between keyword and vector searches, the company’s approach to ranking, and much more.

Key Points From This Episode:
- Learn about Sean’s professional journey and previous experience working with AI and e-commerce.
- Discover why Sean is so passionate about the technology industry and how he was able to see gaps within the e-commerce user experience.
- Gain insights into the challenges currently facing search engines and why it's not just about how you ask the search engine but also about how it responds.
- Get an overview of how Algolia's search algorithm differs from the rest and how it trains results on context to deliver lightning-fast, relevant results.
- Learn about the problems with vectors and how Algolia is using AI to revolutionize the search and discovery process.
- Sean explains Algolia's approach to ranking search results and shares details about Algolia's new decompression algorithm.
- Discover how Algolia's breakthroughs were inspired by different fields like biology and the problems facing search engine optimization for the e-commerce sector.
- Find out when users can expect to see Algolia's approach to search outside of the e-commerce experience.

Tweetables:
“Well, the great thing is that every 10 years the entire technology industry changes, so there is never a shortage of new technology to learn and new things to build.” — Sean Mullaney [0:05:08]
“It is not just the way that you ask the search engine the question, it is also the way the search engine responds regarding search optimization.” — Sean Mullaney [0:08:04]

Links Mentioned in Today’s Episode:
Sean Mullaney on LinkedIn
Algolia
ChatGPT
How AI Happens
Sama
Apr 13, 2023 • 38min

Assessing Computer Vision Models with Roboflow's Piotr Skalski

Today’s guest is a Developer Advocate and Machine Learning Growth Engineer at Roboflow who has the pleasure of providing Roboflow users with all the information they need to use computer vision products optimally. In this episode, Piotr shares an overview of his educational and career trajectory to date; from starting out as a civil engineering graduate to founding an open source project that was way ahead of its time to breaking the million reader milestone on Medium. We also discuss Meta’s Segment Anything Model, the value of packaged models over non-packaged ones, and how computer vision models are becoming more accessible.

Key Points From This Episode:
- What Piotr’s current roles at Roboflow entail.
- An overview of Piotr’s educational and career journey to date.
- The Medium milestone that Piotr recently achieved.
- The motivation behind Piotr’s open source project, Make Sense (and the impact it has had).
- Piotr’s approach to assessing computer vision models.
- The issue of lack of support in the computer vision space.
- Why Piotr is an advocate of packaged models.
- What makes Meta’s Segment Anything Model so novel and exciting.
- An example that highlights how computer vision models are becoming more accessible.
- Piotr’s thoughts about the future potential of ChatGPT.

Tweetables:
“Not only [do] I showcase [computer vision] models but I also show people how to use them to solve some frequent problems.” — Piotr Skalski [0:10:14]
“I am always a fan of models that are packaged.” — Piotr Skalski [0:15:58]
“We are drifting towards a direction where users of those models will not necessarily have to be very good at computer vision to use them and create complicated things.” — Piotr Skalski [0:32:15]

Links Mentioned in Today’s Episode:
Piotr Skalski on LinkedIn
Piotr Skalski on Medium
Make Sense
Roboflow
Segment Anything by Meta AI
How to Use the Segment Anything Model
How AI Happens
Sama
Mar 30, 2023 • 32min

DataRobot's Global AI Ethicist Haniyeh Mahmoudian, Ph.D.

In our conversation, we learn about her professional journey and how this led to her working at DataRobot, what she realized was missing from the DataRobot platform, and what she did to fill the gap. We discuss the importance of bias in AI models, approaches to mitigate models against bias, and why incorporating ethics into AI development is essential. We also delve into the different perspectives of ethical AI, the elements of trust, what ethical “guard rails” are, and the governance side of AI.

Key Points From This Episode:
- Dr. Mahmoudian shares her professional background and her interest in AI.
- How Dr. Mahmoudian became interested in AI ethics and building trustworthy AI.
- What she hopes to achieve with her work and research.
- Hear practical examples of how to build ethical and trustworthy AI.
- We unpack the ethical and trustworthy aspects of AI development.
- What the elements of trust are and how to implement them into a system.
- An overview of the different essential processes that must be included in a model.
- How to mitigate systems from bias and the role of monitoring.
- Why continual improvement is key to ethical AI development.
- Find out more about DataRobot and Dr. Mahmoudian’s multiple roles at the company.
- She explains her approach to working with customers.
- Discover simple steps to begin practicing responsible AI development.

Tweetables:
“When we talk about ‘guard rails’ sometimes you can think of the best practice type of ‘guard rails’ in data science but we should also expand it to the governance and ethics side of it.” — @HaniyehMah [0:11:03]
“Ethics should be included as part of [trust] to truly be able to think about trusting a system.” — @HaniyehMah [0:13:15]
“[I think of] ethics as a sub-category but in a broader term of trust within a system.” — @HaniyehMah [0:14:32]
“So depending on the [user] persona, we would need to think about what kind of [system] features we would have.” — @HaniyehMah [0:17:25]

Links Mentioned in Today’s Episode:
Haniyeh Mahmoudian on LinkedIn
Haniyeh Mahmoudian on Twitter
DataRobot
National AI Advisory Committee
How AI Happens
Sama
Mar 16, 2023 • 26min

Data Scientist & Developer Advocate Kristen Kehrer

Kristen is also the founder of Data Moves Me, a company that offers courses, live training, and career development. She hosts The Cool Data Projects Show, where she interviews AI, machine learning (ML), and deep learning (DL) experts about their projects.

Key Points From This Episode:
- Kristen’s background in the data science world and what led her to her role at Comet.
- What it means to be a developer advocate and build community.
- Some of the coolest AI, ML, and DL ideas from The Cool Data Projects Show!
- One of the computer vision projects Kristen is working on that uses Kaggle datasets.
- How Roboflow can help you deploy a computer vision model in an afternoon.
- The amount of data that is actually needed for object detection.
- Solving the challenge of contextualization for computer vision models.
- A look at attention mechanisms in explainable AI and how to tackle large datasets.
- Insight into the motivations behind Kristen’s school bus project.
- The value of learning through building and solving “real” problems.
- How Kristen’s background as a data scientist lends itself to computer vision.
- Free and easily-available resources that others in the space have created to assist you.
- Advice for those forging their own careers: get involved in the community!

Tweetables:
“I’m finding people who are working on really cool things and focusing on the methodology and approach. I want to know: how did you collect your data? What algorithm are you using? What algorithms did you consider? What were the challenges that you faced?” — @DataMovesHer [0:05:55]
“A lot of times, it comes back to [the fact that] more data is always better!” — @DataMovesHer [0:15:40]
“I like [to do computer vision] projects that allow me to solve a problem that is actually going on in my life. When I do one, suddenly, it becomes a lot easier to see other ways that I can make other parts of my life easier.” — @DataMovesHer [0:18:59]
“The best thing you can do is to get involved in the community. It doesn’t matter whether that community is on Reddit, Slack, or LinkedIn.” — @DataMovesHer [0:23:32]

Links Mentioned in Today’s Episode:
Data Moves Me
Comet
The Cool Data Projects Show
Mothers of Data Science
Kristen Kehrer on LinkedIn
Kristen Kehrer on Twitter
Kristen Kehrer on Instagram
Kristen Kehrer on YouTube
Kristen Kehrer on TikTok
Kaggle
Roboflow
Kangas Library
How AI Happens
Sama
Mar 1, 2023 • 36min

Blue Collar AI with Kirk Borne, Ph.D.

In this episode, we learn the benefits of blue-collar AI education and the role of company culture in employee empowerment. Dr. Borne shares the history of data collection and analysis in astronomy and the evolution of cookies on the internet, and explains the concept of Web3 and the future of data ownership. Dr. Borne is of the opinion that AI serves to amplify and assist people in their jobs rather than replace them, and in our conversation, we discover how everyone can benefit if adequately informed.

Key Points From This Episode:
- Data scientist and astrophysicist Dr. Kirk Borne’s vast background.
- The history of data collection and analysis in astronomy.
- How Dr. Borne fulfills his passion for educating others.
- DataPrime’s blue-collar AI education course.
- How AI amplifies your work without replacing it.
- The difference between efficiency and effectiveness.
- The difference between educating blue-collar students and graduate students.
- The goal of the blue-collar AI course.
- The ways in which automation and digital transformation are changing jobs.
- Comparison between the AI revolution (the fourth industrial revolution) and previous industrial revolutions.
- The role of company culture in employee empowerment.
- Dr. Borne’s approach to teaching AI education.
- Dr. Borne shares a humorous Richard Feynman anecdote.
- The concept of Web3 and the future of data ownership.
- The history and evolution of cookies on the internet.
- The ethical concerns of AI.

Tweetables:
“[AI] amplifies and assists you in your work. It helps automate certain aspects of your work but it’s not really taking your work away. It’s just making it more efficient, or more effective.” — @KirkDBorne [0:11:18]
“There’s a difference between efficiency and effectiveness … Efficiency is the speed at which you get something done and effective means the amount that you can get done.” — @KirkDBorne [0:11:29]
“There are different ways that automation and digital transformation are changing a lot of jobs. Not just the high-end professional jobs, so to speak, but the blue-collar gentlemen.” — @KirkDBorne [0:18:06]
“What we’re trying to achieve with this blue-collar AI is for people to feel confident with it and to see where it can bring benefits to their business.” — @KirkDBorne [0:24:08]
“I have yet to see an auto-complete come over your phone and take over the world.” — @KirkDBorne [0:26:56]

Links Mentioned in Today’s Episode:
Kirk Borne, Ph.D.
Kirk Borne, Ph.D. on LinkedIn
Kirk Borne, Ph.D. on Twitter
Richard Feynman
JennyCo
Alchemy Exchange
Booz Allen Hamilton
DataPrime
How AI Happens
Sama
Feb 23, 2023 • 33min

Training Biometric Tech with Head of AI George Williams

Goodbye Passwords, Hello Biometrics with George Williams
Episode 61: Show Notes

Is it really safer to have a system know your biometrics rather than your password? If so, who do you trust with this data? George Williams, a Silicon Valley tech veteran who most recently served as Head of AI at SmileIdentity, is passionate about machine learning, mathematics, and data science. In this episode, George shares his opinions on the dawn of AI, how long he believes AI has been around, and references the ancient Greeks to show the relationship between the current fifth big wave of AI and the genesis of it all. Focusing on the work done by SmileIdentity, you will understand the growth of AI in Africa, what biometrics is and how it works, and the mathematical vulnerabilities in machine learning. Biometrics is substantially more complex than password authentication, and George explains why he believes this is the way of the future.

Key Points From This Episode:
- George's opinions on the genesis of AI.
- The link between robotics and AI.
- The technology and ideas of the ancient Greeks, in the time of Aristotle.
- George’s career past: software engineer versus mathematics.
- What George’s role is within SmileIdentity.
- How Africa is skipping passwords and going straight to advanced biometrics.
- How George uses biometrics in his everyday life.
- Quantum supremacy: how it works and its implications.
- George’s opinions on conspiracy theories about the government having personal information.
- Why understanding the laws and regulations of technology is important.
- The challenges of data security and privacy.
- Some ethical questions about bias in biometrics, mass surveillance, and AI.
- George explains ‘garbage in, garbage out’ and how it relates to machine learning.
- How SmileIdentity is ensuring ethnic diversity and accuracy.
- How to measure an unbiased algorithm.
- Why machine learning is a life cycle.
- The fraud detection technology in SmileIdentity biometric security.
- The shift of focus in machine learning and cyber security.

Tweetables:
“Robotics and artificial intelligence are very much intertwined.” — @georgewilliams [0:02:14]
“In my daily routine, I leverage biometrics as much as possible and I prefer this over passwords when I can do so.” — @georgewilliams [0:08:13]
“All of your data is already out there in one form or another.” — @georgewilliams [0:10:38]
“We don’t all need to be software developers or ML engineers, but we all have to understand the technology that is powering [the world] and we have to ask the right questions.” — @georgewilliams [0:11:53]
“[Some of the biometric] technology is imperfect in ways that make me uncomfortable and this technology is being deployed at massive scale in parts of the world and that should be a concern for all of us.” — @georgewilliams [0:20:33]
“In machine learning, once you train a model and deploy it you are not done. That is the start of the life cycle of activity that you have to maintain and sustain in order to have really good AI biometrics.” — @georgewilliams [0:22:06]

Links Mentioned in Today’s Episode:
George Williams on Twitter
George Williams on LinkedIn
SmileIdentity
NYU Movement Lab
ChatGPT
How AI Happens
Sama
