
How AI Happens

Latest episodes

Apr 21, 2022 • 31min

GRID.ai's Lead AI Educator Sebastian Raschka

Joining us today on How AI Happens is Sebastian Raschka, Lead AI Educator at GRID.ai and Assistant Professor of Statistics at the University of Wisconsin-Madison. Sebastian fills us in on the coursework he's creating in his role at GRID.ai, and we explore the crossover of machine learning between academia and the private sector. We also speculate on the pros and cons of the commodification of deep learning models and debate which machine learning framework is better: PyTorch or TensorFlow.

Key Points From This Episode:
- Sebastian Raschka's journey from computational biology to AI and machine learning.
- The focus of his current role as Lead AI Educator at GRID.ai.
- The ideal applications and outcomes of the coursework Sebastian is developing.
- The crossover of machine learning between academia and the private sector; theory versus application.
- Deep learning versus machine learning, and what constitutes a deep learning problem.
- The importance of sufficient data for deep learning to be effective.
- The applications of the BERT text model.
- The pros and cons of developing more accessible models.
- Why Sebastian set out to write Machine Learning with PyTorch and Scikit-Learn.
- The structure of the book, including theory and application.
- Why Sebastian prefers PyTorch over TensorFlow (see the sketch below).
- What he finds most exciting in the current deep learning space.
- The emerging opportunities to use deep learning.

Tweetables:
- "In academia, the focus is more on understanding how deep learning works… On the other hand, in the industry, there are [many] use cases of machine learning." — @rasbt [0:10:10]
- "Often it is hard to formulate answers as a human to complex questions." — @rasbt [0:12:53]
- "In my experience, deep learning can be very powerful but you need a lot of data to make it work well." — @rasbt [0:14:06]
- "In [Machine Learning with PyTorch and Scikit-Learn], I tried to provide a resource that is a hybrid between more theoretical books and more applied books." — @rasbt [0:23:21]
- "Why I like PyTorch is that it gives me the readability [and] flexibility to customize things." — @rasbt [0:25:55]

Links Mentioned in Today's Episode:
- Sebastian Raschka
- Sebastian Raschka on Twitter
- GRID.ai
- Machine Learning with PyTorch and Scikit-Learn
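Raschka's point about readability and flexibility is easiest to see in a bare PyTorch training loop, where every step is explicit Python rather than hidden behind a compiled graph. The sketch below is purely illustrative and not taken from the episode or the book; the model, data, and hyperparameters are arbitrary placeholders.

```python
# Illustrative only: a bare-bones PyTorch training loop. Model, data, and
# hyperparameters are placeholders, not anything discussed in the episode.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy data standing in for a real DataLoader.
X = torch.randn(128, 20)
y = torch.randint(0, 2, (128,))

for epoch in range(5):
    optimizer.zero_grad()      # clear gradients from the previous step
    logits = model(X)          # forward pass
    loss = loss_fn(logits, y)  # compute the loss
    loss.backward()            # backpropagate
    optimizer.step()           # update parameters
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```

Every line of the loop can be inspected or swapped out directly, which is the kind of customization the quote refers to.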
Apr 14, 2022 • 24min

Edge Computing & AI Readiness with Antonio Grasso

Antonio Grasso joins us to explain how he empowers some of the biggest companies in the world to use AI in a meaningful way, and describes the two ways his company goes about this. You'll hear what Antonio believes is coming down the pipeline for the Internet of Things, especially when it comes to edge computing, and why network traffic has become a huge concern. We discuss where edge computing begins and ends with regard to the difference between the device and its computational resources. Given that one can infer at the edge but not train at the edge, Antonio shares why he disagrees that the ultimate goal should be to train at the edge. He also provides a helpful resource for AI practitioners to calculate an AI readiness index.

Key Points From This Episode:
- How Antonio Grasso grew from a software engineer in the 80s to someone so influential in the world of AI today.
- How Antonio empowers some of the biggest companies to use AI in a meaningful way.
- The two faces of Antonio's company, Digital Business Innovation Srl.
- One of Antonio's biggest achievements at IBM.
- How, as a technical expert, he helps people understand what is possible with this technology and build trust.
- The challenges of maintaining the pace of innovation.
- What Antonio believes is coming down the pipeline for the Internet of Things, especially when it comes to edge computing.
- Why network traffic has become a huge concern and the role of edge computing in this.
- What defines edge computing and the difference between the device and its computational resources.
- Given that one can infer at the edge but not train at the edge, Antonio's views on whether or not the goal should be to train at the edge (see the sketch below).
- Antonio's advice to AI practitioners on how to calculate an AI readiness index.

Tweetables:
- "'Wow, this is really unbelievable! We can also create not [only] code software with direct explicit instruction, we can also [create] code software that learns from experience!' That really [caught] me and I fell in love with this kind of technology." — @antgrasso [0:03:13]
- "I started on Social Media to share my knowledge, my experience, because I think you must share what you see because everyone can benefit of it too." — @antgrasso [0:03:39]
- "We need to shift to better understand what is the meaning of edge computing but we must divide the device itself from the computational resources that we put [there] to harness the power of computational power in proximity." — @antgrasso [0:16:15]
- "I can not imagine training at the edge. — We can do it, yes, but my question is why?" — @antgrasso [0:20:50]

Links Mentioned in Today's Episode:
- Antonio Grasso
- Digital Business Innovation Srl
- AI Singapore (AIRI Assessment)
- Antonio Grasso on Twitter
- How AI Happens
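One concrete way to picture the infer-at-the-edge pattern Antonio describes is to train a model centrally and then ship a compact artifact that a lightweight runtime can execute on the device. The sketch below is a minimal illustration under that assumption, not something from the episode; the model, file name, and tensor names are placeholders.

```python
# Illustrative only: export a (placeholder) trained PyTorch model to ONNX so a
# lightweight runtime such as ONNX Runtime can run inference on an edge device.
# Training itself would have happened centrally, before this step.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
model.eval()  # inference mode: no dropout updates, no gradients needed

example_input = torch.randn(1, 10)  # dummy input defining the expected shape

torch.onnx.export(
    model,
    example_input,
    "edge_model.onnx",          # placeholder artifact name
    input_names=["features"],
    output_names=["score"],
)
```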
Apr 7, 2022 • 38min

The Future of NLP with AI Professor, Vice Rector, & Researcher Aleksandra Przegalinska

Today's guest is Aleksandra Przegalinska, PhD, Vice-Rector at Kozminski University, research associate, and Polish futurist. From studying pure philosophy, Aleksandra moved into AI when she started researching natural language processing in the virtual space. We kickstart our discussion with her account of how she ended up where she is now and how she transferred her skills from philosophy to AI. We hear about the popularity Second Life once enjoyed in Asia, why we are seeing a return to anonymization online, and why Aleksandra feels NLP should be called 'natural language understanding'. We also discover what the real-world applications of NLP are and why text processing is under-utilized. Moving on to more philosophical questions around AI and labor, Aleksandra explains how AI should be used to help people and why what is sometimes simple for a human can be immensely complex for AI. We wrap up with Aleksandra's thoughts on transformers and why their applications are more important than their capabilities, as well as why she is so excited about the idea of xenobots.

Key Points From This Episode:
- An introduction to today's guest, Aleksandra Przegalinska, PhD.
- What Aleksandra is researching at the moment and how she ended up in academia.
- Insight into the link between her PhD topic and the Metaverse, and the transfer of skills from a philosophy degree to AI.
- How a properly built digital ecosystem allows people freedom of expression, and other takeaways from Aleksandra's PhD experience.
- The return to online anonymization that we are currently seeing.
- Aleksandra's experience of NLP in Second Life and how AI has altered the field.
- The role of NLP in Aleksandra's work today and why she feels it should be called 'natural language understanding'.
- The real-world applications of NLP: why text processing is under-utilized (see the sketch below).
- Why people don't have to believe that programs are close to human.
- Why Aleksandra feels removing the need for manually annotated data should be a key focus in the field of AI.
- Tradeoffs between automation and human labor; why we should use AI to help humans first.
- How the challenges of automating tasks differ between fields, from creative and marketing to calendar management.
- What Aleksandra thinks of the transformer arms race: why applications are more important than parameters.
- Why Aleksandra feels xenobots will change the world.

Tweetables:
- "My major discovery [during my PhD] was that people are capable of building robust identities online and can live two lives. They can have their first life and then they can have their second life online, which can be very different from the one they pursue on-site, in the real world." — @Przegaa [0:06:42]
- "We can all observe that there is a great boom in NLP. I'm not even sure we should call it NLP anymore. Maybe NLP is an improper phrase. Maybe it's NLU: natural language understanding." — @Przegaa [0:14:51]
- "Transformers seem to be a really big game-changer in the AI space." — @Przegaa [0:16:40]
- "I think that using text as a resource for data analytics for businesses in the future is something that we will see happen in the coming two or three years." — @Przegaa [0:19:46]
- "AI should not replace you, AI should help you at your work and make your work more effective but also more satisfying for you." — @Przegaa [0:25:31]

Links Mentioned in Today's Episode:
- Aleksandra Przegalinska on LinkedIn
- Aleksandra Przegalinska on Twitter
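As a concrete, purely illustrative companion to the point that text processing is under-utilized, the snippet below runs off-the-shelf sentiment classification with the Hugging Face transformers library. The example sentences are invented, and none of this comes from the episode.

```python
# Illustrative only: off-the-shelf text classification with a pretrained model.
# The pipeline downloads a default sentiment model on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [  # made-up examples of the kind of text businesses already hold
    "The onboarding flow was confusing and support never replied.",
    "Setup took five minutes and everything just worked.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```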
Mar 31, 2022 • 33min

Unpacking Facial Recognition Technology at CyberLink

CyberLink's facial recognition technology routinely registers best-in-class accuracy. But how do developers deal with masks, glasses, headphones, or changes in faces over time? How can they prevent spoofing in order to protect identities? And where does computer vision & object detection stop and FRT truly begin? CyberLink Senior Vice President of Global Marketing and US General Manager Richard Carriere and Head of Sales Engineering Craig Campbell join us to discuss the endless use cases for facial recognition technology, how CyberLink is improving the tech's accuracy & security, and the ethical considerations of deploying FRT at scale.

CyberLink's Ultimate Guide to Facial Recognition
FaceMe Security SDK Demo
Get in touch with CyberLink: FaceMe_US@cyberlink.com
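On the question of where computer vision and object detection stop and FRT begins, the sketch below covers only the detection half: finding face bounding boxes with OpenCV's bundled Haar cascade. Matching a detected face to an identity, which is what products like FaceMe do, is a separate downstream step not shown here; the image path and parameters are placeholders, and this is not CyberLink's technology.

```python
# Illustrative only: face *detection* (boxes, no identities) with OpenCV.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("example.jpg")  # placeholder image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:  # each hit is just a bounding box, no identity
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"Detected {len(faces)} face(s); recognition would happen downstream.")
```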
Mar 24, 2022 • 33min

Universal Autonomy with Oxbotica VP of Technology Ben Upcroft

Oxbotica is a vehicle software company at the forefront of autonomous technology, and today we have a fascinating chat with Ben Upcroft, the Vice President of Technology. Ben explains Oxbotica's mission of enabling industries to make the most of autonomy, and how their technological progress affects real-world situations. We also get into some of the challenges that Oxbotica, and the autonomy space in general, are currently facing, before drilling down on the important concepts of user trust, future implementations, and creating an adaptable core functionality. The last part of today's episode is spent exploring the exciting possibilities of simulated environments for data collection, and the broadening of vehicle experience. Ben talks about the importance of seeking out edge cases to improve their data, and we get into how Oxbotica applies this data across locations.

Key Points From This Episode:
- The constant joy and excitement that Ben feels about his work!
- An introduction to Oxbotica and its main mission as an organization.
- How the advances in autonomy translate into real-world progress in safety and efficiency.
- Handbrakes on the widespread implementation of more autonomy; Ben looks at current limitations.
- Facilitating trust in the public sphere for something new and the markers of progress.
- Oxbotica's array of vehicles and goals beyond basic transportation.
- Constant evolution and the question of staying on course with the rising tide of technology.
- How generic features allow for an adaptable core functionality in Oxbotica's vehicles.
- Applying data from different environments to boost performance across location types.
- How Oxbotica focuses on simulated edge cases as a means to broaden the capabilities of their technologies.
- The amount of real-world data that is necessary for accurate synthesis.
- Assessing the idea of quality over quantity when it comes to data for AI applications.
- The areas of the AI field that have Ben most excited right now; emulation of the human brain for the creation of new platforms!

Tweetables:
- "Oxbotica is about deploying and enabling industries to use and leverage autonomy for performance, for efficiency, and safety gains." — @ben_upcroft
- "The autonomy that we bring revolutionizes how we move around the globe, through logistics transport, on wheeled vehicles." — @ben_upcroft
- "The idea behind the system is that it is modular, enables a core functionality, and I am able to add little extras that customize for a particular domain." — @ben_upcroft

Links Mentioned in Today's Episode:
- Ben Upcroft on LinkedIn
- Oxbotica
- Ben Upcroft on Twitter
Mar 17, 2022 • 34min

Upleveling Data Labeling with Sama's Jerome Pasquero

Key Points From This Episode:
- Jerome's background, interest in AI, and how he landed his role at Sama.
- Social initiatives, training data, and what attracted Jerome to Sama.
- The shift from focusing on AI models to the importance of data quality.
- Why academia requires the use of a foundational dataset to compare models.
- The reason for the early focus on building new AI models.
- Whether datasets will become open source in the future as models have.
- The role of annotation in making data meaningful and useful.
- Challenges of annotating data and different approaches to doing so.
- The three components of data annotation: models, filtering, and the annotation pipeline.
- How to home in on goals for filtering data into valuable subsets that align with your desired outcomes.
- How to measure a model's accuracy by focusing on user experience and more.
- What data drift is and how to prevent it by keeping track of it and retraining models where necessary (see the sketch below).
- How to know that your training data is close enough to your production data.
- What excites Jerome most about the world of data and annotation.

Tweetables:
- "Most of the successful model architectures are now open source. You can get them anywhere on the web easily, but the one thing that a company is guarding with its life is its data." — Jerome Pasquero [0:05:36]
- "If you consider that we now know that a model can be highly sensitive to the quality of the data that are used to train it, there is this natural shift to try to feed models with the best data possible and data quality becomes of paramount importance." — Jerome Pasquero [0:05:47]
- "The point of this whole system is that, once you have these three components in place, you can drive your filtering strategy." — Jerome Pasquero [0:14:06]
- "You can always get more data later. What you want to avoid is getting yourself into a situation where the data that you are annotating is useless." — Jerome Pasquero [0:17:30]
- "A model is like a living thing. You need to take care of it otherwise it is going to degrade, not because it's degrading internally, but because the data that it is used to seeing has changed." — Jerome Pasquero [0:25:49]

Links Mentioned in Today's Episode:
- Jerome Pasquero on LinkedIn
- Jerome Pasquero Blog: Top 10 Data Labeling FAQs
- Sama
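To make the data-drift discussion concrete, here is one simple, purely illustrative way to flag drift: compare a feature's distribution in the training set against recent production data with a two-sample statistical test. The data, threshold, and single-feature setup are placeholder assumptions, not Sama's pipeline.

```python
# Illustrative only: flag possible drift for one feature by comparing its
# training-time distribution against recent production values.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model trained on
prod_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)   # what it sees in production

statistic, p_value = ks_2samp(train_feature, prod_feature)  # two-sample Kolmogorov-Smirnov test
if p_value < 0.01:  # placeholder threshold
    print(f"Possible drift: KS={statistic:.3f}, p={p_value:.2e} -> consider re-labeling and retraining")
else:
    print("No significant drift detected for this feature")
```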
Mar 3, 2022 • 41min

A Highly Compositional Future with Dr. Eric Daimler

Dr. Daimler is an authority in Artificial Intelligence with over 20 years of experience in the field as an entrepreneur, executive, investor, technologist, and policy advisor. He is also the founder of data integration firm Conexus, and we kick our conversation off with the work he is doing to integrate large heterogeneous data infrastructures. This leads us into an exploration of the concept of compositionality, a structural feature that enables systems to scale, which Dr. Daimler argues is the future of IT infrastructure. We discuss how the way we apply AI to data is constantly changing, with data sources growing quadratically, and how this necessitates that AI specialists understand newer forms of math such as category theory. Towards the end of our discussion, we move on to the subject of the adoption of AI in technologies that lives depend on, and Dr. Daimler gives his recommendation for how to engender trust amongst the larger population.

Key Points From This Episode:
- Dr. Daimler's experience in AI in academic, commercial, and governmental capacities.
- An issue in the choices being made around how to create data that is useful in large organizations.
- Dr. Daimler's work bringing heterogeneous data together to influence better business decisions.
- How much money is wasted on ETL processes and how bad the jobs in that field are.
- The difference between modularity and compositionality and why the latter is the future of IT infrastructure.
- How compositionality enables scalability and the branches of math needed to justify it (see the toy sketch below).
- The work Dr. Daimler is doing in the field of compositionality at Conexus.
- Whether it is crucial to grasp these newer forms of math to achieve AI mastery.
- How AI systems can integrate into contexts involving human labor and empathy.
- The need to bring together probabilistic and deterministic AI in life-and-death contexts.
- How to get the public to trust and believe in AI-powered tech with the capacity to save lives.
- What AI practitioners can do to ensure they use their skillset to create a better future.

Tweetables:
- "You can create data that doesn't add more fidelity to the knowledge you're looking to gain for better business decisions and that is one of the limitations that I saw expressed in the government and other large organizations." — @ead [0:01:32]
- "That's the world, is compositionality. That is where we are going and the math that supports that, type theory, categorical theory, categorical logic, that's going to sweep away everything underneath IT infrastructure." — @ead [0:10:23]
- "At the trillions of data, a trillion data sources, each growing quadratically, what we need is category theory." — @ead [0:13:51]
- "People die and the way to solve that problem when you are talking about these life and death contexts for commercial airplane manufacturers or in energy exploration where the consequences of failure can be disastrous is to bring together the sensibilities of probabilistic AI and deterministic AI." — @ead [0:24:07]
- "Circuit breakers, oversight, and data lineage, those are three ways that I would institute a regulatory regime around AI and algorithms that will engender trust amongst the larger population." — @ead [0:35:12]

Links Mentioned in Today's Episode:
- Dr. Eric Daimler on LinkedIn
- Dr. Eric Daimler on Twitter
- Conexus
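As a toy illustration of what compositionality buys you (small, well-typed pieces compose into pieces of the same kind, so larger systems can be built and reasoned about from smaller ones), the sketch below composes two tiny schema mappings into one. It is an editorial simplification, not Conexus's actual category-theoretic machinery, and every name in it is made up.

```python
# Illustrative toy only: "mappings" over records compose like functions,
# and the composite is itself a mapping of the same kind.
from typing import Callable

Mapping = Callable[[dict], dict]

def rename_field(old: str, new: str) -> Mapping:
    def apply(record: dict) -> dict:
        record = dict(record)          # copy so the input is left untouched
        record[new] = record.pop(old)  # move the value under the new key
        return record
    return apply

def compose(f: Mapping, g: Mapping) -> Mapping:
    return lambda record: g(f(record))  # the composite is itself a Mapping

# Two small schema mappings compose into one, with no special glue code.
to_canonical = compose(rename_field("cust_id", "customer_id"),
                       rename_field("amt", "amount"))
print(to_canonical({"cust_id": 42, "amt": 9.99}))
# -> {'customer_id': 42, 'amount': 9.99}
```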
Feb 24, 2022 • 31min

Transfer Learning & Solving Unstructured Data with Indico Data CTO Slater Victoroff

Irrespective of the application or the technology, a common problem among AI professionals always seems to be data. Is there enough of it? What do we prioritize? Is it clean? How do we annotate it? Today's guest, however, believes that AI is not data-limited but compute-limited. Joining us to share some very interesting insights on the subject is Slater Victoroff, Founder and Chief Technology Officer at Indico, an unstructured data platform that enables users to build innovative, mission-critical enterprise workflows that maximize opportunity, reduce risk, and accelerate revenue. Slater explains how he came to co-found Indico Data despite once believing that deep learning was dead. He explains what happened that unlocked deep learning, how he was influenced by the AlexNet paper, and how Indico goes about solving the problem of unstructured data.

Key Points From This Episode:
- How Slater Victoroff came to found Indico and how he came to understand the value of deep learning.
- How Indico's approach has evolved over time.
- What happened that unlocked deep learning and what inspired Slater to incorrectly believe it was over before it began.
- The publication of the AlexNet paper in 2012 and its influence on deep learning.
- Insight into the application of deep learning at Indico and their focus on human-machine interaction.
- What is meant by "solving the problem of unstructured data".
- How Indico is reducing the price of building unstructured use cases.
- Thoughts on whether or not the downsizing of investment and hardware requirements of AI technology is a necessary outcome.
- The surprisingly low percentage of projects that succeed.
- Why Slater believes that AI today is not data-limited but compute-limited.
- Why resolving compute won't remove the need for clean annotated data.
- Whether or not determining which data to prioritize is still a computer vision problem.
- How the refocus on transfer learning has affected Indico's approach (see the sketch below).
- What Slater is really excited about in the short and medium-term future of AI.

Tweetables:
- "Deep learning is particularly useful for these sorts of unstructured use-cases, image, text, audio. And it's an incredibly powerful tool that allows us to attack these use cases in a way that we fundamentally weren't able to otherwise." — @sl8rv [0:02:44]
- "By and large, AI today is not data-limited, it is compute limited. It is the only field in software that you can say that." — @sl8rv [0:19:27]
- "That's really this next frontier though: This is where transfer learning is going next, this idea 'Can I take visual information and language information? Can I understand that together in a comprehensive way, and then give you one interface to learn on top of that consolidated understanding of the world?'" — @sl8rv [0:26:05]
- "We have gone from asking the question 'Is transfer learning possible?' to asking the question 'What does it take to be the best in the world at transfer learning?'" — @sl8rv [0:27:03]

Links Mentioned in Today's Episode:
- "Visualizing and Understanding Convolutional Networks"
- Slater Victoroff
- Slater Victoroff on Twitter
- Indico Data
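For a concrete picture of the transfer learning Slater describes, the sketch below fine-tunes only a small classification head on top of a frozen, pretrained vision backbone. It is purely illustrative: the torchvision model, class count, and random data are placeholder assumptions, not Indico's stack.

```python
# Illustrative only: reuse a pretrained backbone, train just a new head.
import torch
from torch import nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained on ImageNet
for param in backbone.parameters():
    param.requires_grad = False  # freeze the pretrained features

num_classes = 3  # hypothetical downstream task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One dummy training step on random data, standing in for a real labeled batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.4f}")
```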
Feb 17, 2022 • 29min

AI Opportunity in the Space Ecosystem with Space Foundation COO Shelli Brunswick

The innovations that drive space exploration not only aid us in discovering other worlds, but they also benefit us right here on earth. Today's guest is Shelli Brunswick, who joins us to talk about the role of AI in space exploration and how the 'space ecosystem' can create jobs and career opportunities on Earth. Shelli is the COO at the Space Foundation and was selected as the 2020 Diversity and Inclusion Officer and Role Model of the Year by WomenTech Network and a Woman of Influence by the Colorado Springs Business Journal. We kick our discussion off by hearing how Shelli got to her current role and what it entails. She talks about how connected the space industry has become to many others, and how this amounts to a 'space ecosystem', a rich field for opportunity, innovation, and commerce. We talk about the many innovations that have stemmed from space exploration, the role they play on this planet, and the possibilities this holds as the space ecosystem continues to grow. She gets into the programs at the Space Foundation to encourage entrepreneurship and the ways that innovators can direct their efforts to participate in the space ecosystem. We also explore the many ways that AI plays a role in the space ecosystem and how the AI being utilized across industries on earth will find later applications in space. Tune in today to learn more!

Key Points From This Episode:
- Shelli's background and career journey to her role as COO at the Space Foundation.
- What made Shelli decide that she wanted to occupy her current role.
- Using space innovation to benefit us on earth; the definition of 'space ecosystem'.
- The many industries and countries that participate in the space ecosystem.
- Using the Center for Innovation and Education to create diversity and opportunities across industries.
- The role of AI in the growing space ecosystem; satellites, space exploration, GPS, and more.
- How AI regulates hydroponic agriculture on earth and can in space too.
- The many challenges of living off-world and the role AI will play.
- How entrepreneurship in the context of the space ecosystem is taught at the Space Commerce Institute.
- The spinoff commercialization and innovations that come from the space industry.
- How AI practitioners can point their expertise and technology into the space ecosystem.
- What AI will change in the world of the future and the effects of this on jobs.

Tweetables:
- "What we really need to do is wrap it back to how that space technology, that space innovation, that investing in space, benefits us right here on planet earth and creates jobs and career opportunities." — @shellibrunswick [0:05:52]
- "The sky is not the limit [for the role that] AI can play in this." — @shellibrunswick [0:12:12]
- "It is the Wild West. It is exciting and, if you want to be an entrepreneur, buckle in because there is an opportunity for you!" — @shellibrunswick [0:20:36]
- "You can sit in the Space Symposium sessions and hear what are those governments investing in, what are those companies investing in, and how can you as an entrepreneur create a product or service that's related to AI that helps them fill that capability gap?" — @shellibrunswick [0:22:00]

Links Mentioned in Today's Episode:
- Shelli Brunswick on LinkedIn
- Shelli Brunswick on Twitter
- The Space Foundation
- The Center for Innovation and Education
- Space Symposium
Feb 10, 2022 • 34min

Autonomous Vehicles' Impact on Cities with Lyft's Sarah Barnes

A future filled with autonomous vehicles promises a driving utopia: maximum-efficiency navigation that reduces traffic and congestion, safety features that drastically cut collisions with other cars, bikes, and pedestrians, and an electric-first approach that lowers greenhouse gas emissions. But as today's guest asserts on the back of her extensive research, the implications of a huge increase in autonomous vehicles on our streets aren't rosy by default. Sarah Barnes works on the micro-mobility team at Lyft and has published a variety of works documenting the expected implications of more autonomous vehicles in major metropolitan areas: implications that are good, bad, and ugly. Sarah argues that without a serious focus on three transport revolutions (making transport shared, electric, AND autonomous), congestion and pollution could be here to stay. Sarah walks me through what the various implications are, and how local governments and AI practitioners can partner on policy and technology to create a future that works for everyone.
