
How AI Happens

Latest episodes

May 26, 2022 • 27min

Qualcomm Head of AI & ML Product Management Dr. Vinesh Sukumar

During Vinesh Sukumar’s colorful career he worked at NASA, Apple, Intel, and a variety of other companies before finding his way to Qualcomm, where he is currently the Head of AI/ML Product Management. In today’s conversation, Vinesh shares his experience of developing the camera for the very first iPhone and one of the biggest lessons he learned from working with Steve Jobs. We then discuss what his current role entails and the biggest challenge that comes with it, Qualcomm’s approach to sustainability from a hardware, systems, and software standpoint, and his thoughts on why edge computing is so important.

Key Points From This Episode:
- An overview of Vinesh’s career trajectory, including his experiences at NASA, Apple, and Intel.
- The focal area of Vinesh’s PhD.
- Challenges that Vinesh faced while working on cutting-edge technology for camera phones.
- Some of the early AI applications that were used in smartphone cameras.
- The most important factors to consider when developing cameras for phones.
- Valuable lessons that Vinesh learned from working with Steve Jobs.
- What Vinesh’s role as Head of AI/ML Product Management at Qualcomm consists of.
- Why optimization is one of the biggest technical challenges that Vinesh faces in his role at Qualcomm.
- The four buckets of MLOps.
- Vinesh explains why edge computing is so important.
- Benefits of building intelligence into devices rather than requiring a connection to the cloud.
- Qualcomm’s approach to scalability.
- Why Vinesh is excited about cognitive AI.

Tweetables:
“Camera became one of the most important features for a consumer to buy a phone. Then visual analytics, AI, deep learning, ML really started seeping into images, and then into videos, and now the most important consumer influencing factor to buy a phone is the camera.” — Vinesh Sukumar [0:07:01]
“Reaction time is much better when you have intelligence on the device, rather than giving it to the cloud to make the decision for you.” — Vinesh Sukumar [0:20:48]

Links Mentioned in Today’s Episode:
- Vinesh Sukumar on LinkedIn
- Qualcomm
May 19, 2022 • 36min

AI in the Metaverse with Dr. Mark Rijmenam

Joining us on this episode of How AI Happens is four-time author, entrepreneur, future tech strategist, and The Digital Speaker himself, Dr. Mark van Rijmenam. Mark explains the extraordinary opportunities and challenges facing business leaders, consumers, regulators, policymakers, and other metaverse stakeholders trying to navigate the future of the internet; the important role that AI will play in the metaverse; why he believes we need to enable what he calls ‘anonymous accountability’; and how you can actively participate in building ethical AI.

Key Points From This Episode:
- Meet Dr. Mark van Rijmenam and gain some insight into his trajectory thus far.
- The role that AI and the blockchain played in Mark’s book, The Organisation of Tomorrow.
- What we can learn about feedback loops from the failures of Microsoft’s Tay chatbot.
- At what point technology shifts from a tool employed by practitioners to an autonomous agent.
- Distinguishing between artificial general intelligence (AGI) and Super AI.
- Mark responds to those who believe we will never reach Super AI; it’s inevitable!
- The advent of the metaverse and why Mark believes it will unlock a trillion-dollar social economy, as per his book, Step Into the Metaverse.
- How Web 3.0 will allow us to reclaim control of our data, digital assets, and identity; moving from value extraction to value creation.
- Understanding the difference between the metaverse and Web 3.0 without conflating the two.
- How Mark sees AI participating in the metaverse and the role it will play in this ‘new world’.
- The dangers that come with the uncanny ‘deep fakes’ of the future.
- Our responsibility to properly verify the digital information we consume and how AI can help.
- What Mark means when he says we need to enable ‘anonymous accountability’.
- How to take advantage of the career opportunities of Web 3.0 and the metaverse, and how you can contribute to building ethical AI.

Tweetables:
“The social and the material [systems are] very good but, for the organizations of tomorrow, we need to add a third actor, which is the artificial.” — @VanRijmenam [0:03:05]
“Once we reach AGI, that will be a fundamental shift because, once we have AGI—which is as intelligent as a human being, but at an exponential speed—everything will change.” — @VanRijmenam [0:08:34]
“How can we create a metaverse that doesn’t continue on the path of the internet of today? We have this blank canvas where we can construct this immersive internet in ways where we do own our data, [digital assets, identity, and reputation] using a self-sovereign approach.” — @VanRijmenam [0:15:09]
“Technology is neutral. My objective is to help people move to the positive side of technology.” — @VanRijmenam [0:29:24]

Links Mentioned in Today’s Episode:
- Dr. Mark van Rijmenam on LinkedIn
- Dr. Mark van Rijmenam on Twitter
- The Digital Speaker
- Datafloq
- Between Two Bots Podcast
- Step Into the Metaverse
- The Organisation of Tomorrow
- ‘The Matrix Awakens: An Unreal Engine 5 Experience’
May 12, 2022 • 31min

IBM Master Inventor & AI Advisor to the UN Neil Sahota

Neil Sahota is an AI Advisor to the UN, co-founder of the UN’s AI for Good initiative, IBM Master Inventor, and author of Own the AI Revolution. In today’s episode, Neil shares some of the valuable lessons he learned during his first experience working in the AI world, which involved training the Watson computer system. We then dive into a number of different topics, ranging from Neil’s thoughts on synthetic data and the language learning capacity of AI versus a human child, to an overview of the AI for Good initiative and what Neil believes a “cyborg future” could entail!

Key Points From This Episode:
- A few of the thousands of data points that humans use to make rapid judgments.
- Neil’s introduction into the world of AI.
- How data collection changed AI, using the Watson computer system as an example.
- Lessons that Neil learned through training Watson.
- The relative importance of confidence levels with regard to training AI in different fields.
- Why reaching a 99.9% confidence level is not realistic.
- Examples of cases where synthetic data is and isn’t helpful.
- A major difference between the language learning trajectory of AI versus a human child.
- Areas that Neil believes AI is best suited for.
- The focus of the United Nations’ AI for Good initiative.
- The UN’s approach to bringing AI technologies to remote parts of the world.
- Benefits of being exposed to technology at a young age.
- The cyborg future: what Neil believes this is going to look like.
- Why Neil is excited about AI augmentation for human creativity.

Tweetables:
“We, as human beings, have to make really rapid judgement calls, especially in sports, but there’s still thousands of data points in play and the best of us can only see seven to 12 in real time.” — @neil_sahota [0:01:21]
“Synthetic data can be a good bridge if we’re in a very closed ecosystem.” — @neil_sahota [0:11:47]
“For an AI system, if it gets exposed to about 100 billion words it becomes proficient and fluent in a language. If you think about a human child, it only needs about 30 billion words. So, it’s not the volume that matters, there’s certain words or phrases that trigger the cognitive learning for language. The problem is that we just don’t understand what that is.” — @neil_sahota [0:14:22]
“Things that are more hard science, or things that have the least amount of variability, are the best things for AI systems.” — @neil_sahota [0:16:26]
“Local problems have global solutions.” — @neil_sahota [0:20:06]

Links Mentioned in Today’s Episode:
- Neil Sahota
- Neil Sahota on LinkedIn
- Own the A.I. Revolution
- AI for Good
May 5, 2022 • 22min

Prospitalia Group CEO Dr. Marcell Vollmer

Dr. Marcell Vollmer is the CEO of Prospitalia Group, formerly Chief Innovation Officer at Celonis and Chief Digital Officer at SAP. He joins to discuss Machine Learning advances in MedTech and how practitioners can be thoughtful about when it is appropriate to deploy ML.
Apr 28, 2022 • 25min

AI Safety Engineering - Dr. Roman Yampolskiy

Today’s guest has committed many years of his life to trying to understand Artificial Superintelligence and the security concerns associated with it. Dr. Roman Yampolskiy is a computer scientist (with a Ph.D. in behavioral biometrics) and an Associate Professor at the University of Louisville. He is also the author of the book Artificial Superintelligence: A Futuristic Approach. Today he joins us to discuss AI safety engineering. You’ll hear about some of the safety problems he has discovered in his 10 years of research, his thoughts on accountability and ownership when AI fails, and whether he believes it’s possible to enact any real safety measures in light of the decentralization and commoditization of processing power. You’ll discover some of the near-term risks of not prioritizing safety engineering in AI, how to make sure you’re developing it in a safe capacity, and which organizations are deploying it in a way that Dr. Yampolskiy believes to be above board.

Key Points From This Episode:
- An introduction to Dr. Roman Yampolskiy, his education, and how he ended up in his current role.
- Insight into Dr. Yampolskiy’s Ph.D. dissertation in behavioral biometrics and what he learned from it.
- A definition of AI safety engineering.
- The two subcomponents of AI safety: systems we already have and future AI.
- Thoughts on whether there is a greater need for guardrails in AI than in other forms of technology.
- Some of the safety problems that Dr. Yampolskiy has discovered in his 10 years of research.
- Dr. Yampolskiy’s thoughts on the need for some type of AI security governing body or oversight board.
- Whether it’s possible to enact any sort of safety in light of the decentralization and commoditization of processing power.
- Solvable problem areas.
- Trying to negotiate the tradeoff between enabling AI to have creative freedom and being able to control it.
- Thoughts on whether there will come a time when we have to decide whether to go past the point of no return in terms of AI superintelligence.
- Some of the near-term risks of not prioritizing safety engineering in AI.
- What led Dr. Yampolskiy to focus on this area of AI expertise.
- How to make sure you’re developing AI safely.
- Thoughts on accountability and ownership when AI fails, and the legal implications of this.
- Other problems Dr. Yampolskiy has uncovered.
- Thoughts on the need for a greater understanding of the implications of AI work and whether this is a conceivable solution.
- Use cases or organizations that are deploying AI in a way that Dr. Yampolskiy believes to be above board.
- Questions that Dr. Yampolskiy would be asking if he were on an AI development safety team.
- How you can measure progress in safety work.

Tweetables:
“Long term, we want to make sure that we don’t create something which is more capable than us and completely out of control.” — @romanyam [0:04:27]
“This is the tradeoff we’re facing: Either [AI] is going to be very capable, independent, and creative, or we can control it.” — @romanyam [0:12:11]
“Maybe there are problems that we really need Superintelligence [to solve]. In that case, we have to give it more creative freedom but with that comes the danger of it making decisions that we will not like.” — @romanyam [0:12:31]
“The more capable the system is, the more it is deployed, the more damage it can cause.” — @romanyam [0:14:55]
“It seems like it’s the most important problem, it’s the meta-solution to all the other problems. If you can make friendly well-controlled superintelligence, everything else is trivial. It will solve it for you.” — @romanyam [0:15:26]

Links Mentioned in Today’s Episode:
- Dr. Roman Yampolskiy
- Artificial Superintelligence: A Futuristic Approach
- Dr. Roman Yampolskiy on Twitter
Apr 21, 2022 • 31min

GRID.ai's Lead AI Educator Sebastian Raschka

Joining us today on How AI Happens is Sebastian Raschka, Lead AI Educator at GRID.ai and Assistant Professor of Statistics at the University of Wisconsin-Madison. Sebastian fills us in on the coursework he’s creating in his role at GRID.ai, and we find out what can be attributed to the crossover of machine learning between academia and the private sector. We speculate on the pros and cons of the commodification of deep learning models, and debate which machine learning framework is better: PyTorch or TensorFlow.

Key Points From This Episode:
- Sebastian Raschka’s journey from computational biology to AI and machine learning.
- The focus of his current role as Lead AI Educator at GRID.ai.
- The ideal applications and outcomes of the coursework Sebastian is developing.
- The crossover of machine learning in academia and the private sector; the theory versus the application.
- Deep learning versus machine learning and what constitutes a deep learning problem.
- The importance of sufficient data for deep learning to be effective.
- The applications of the BERT text model.
- The pros and cons of developing more accessible models.
- Why Sebastian set out to write Machine Learning with PyTorch and Scikit-Learn.
- The structure of the book, including theory and application.
- Why Sebastian prefers PyTorch over TensorFlow.
- What he finds most exciting in the current deep learning space.
- The emerging opportunities to use deep learning!

Tweetables:
“In academia, the focus is more on understanding how deep learning works… On the other hand, in the industry, there are [many] use cases of machine learning.” — @rasbt [0:10:10]
“Often it is hard to formulate answers as a human to complex questions.” — @rasbt [0:12:53]
“In my experience, deep learning can be very powerful but you need a lot of data to make it work well.” — @rasbt [0:14:06]
“In [Machine Learning with PyTorch and Scikit-Learn], I tried to provide a resource that is a hybrid between more theoretical books and more applied books.” — @rasbt [0:23:21]
“Why I like PyTorch is that it gives me the readability [and] flexibility to customize things.” — @rasbt [0:25:55]

Links Mentioned in Today’s Episode:
- Sebastian Raschka
- Sebastian Raschka on Twitter
- GRID.ai
- Machine Learning with PyTorch and Scikit-Learn
Apr 14, 2022 • 24min

Edge Computing & AI Readiness with Antonio Grasso

Antonio Grasso joins us to explain how he empowers some of the biggest companies in the world to use AI in a meaningful way, and he describes the two ways his company goes about this. You’ll hear what Antonio believes is coming down the pipeline in terms of the Internet of Things, especially when it comes to edge computing, and why network traffic has become a huge concern. We discuss where edge computing begins and ends with regard to the difference between the device and its computational resources. In light of the fact that one can infer at the edge but not train at the edge, Antonio shares his views on why he disagrees that the ultimate goal should be to train at the edge. He also provides a helpful resource for AI practitioners to calculate an AI readiness index.

Key Points From This Episode:
- How Antonio Grasso grew from a software engineer in the ’80s to someone so influential in the world of AI today.
- How Antonio empowers some of the biggest companies to use AI in a meaningful way.
- The two faces of Antonio’s company, Digital Business Innovation Srl.
- One of Antonio’s biggest achievements at IBM.
- How, as a technical expert, he helps people understand what is possible with this technology and build trust.
- The challenges of maintaining the pace of innovation.
- What Antonio believes is coming down the pipeline in terms of the Internet of Things, especially when it comes to edge computing.
- Why network traffic has become a huge concern and the role of edge computing in this.
- What defines edge computing, and the difference between the device and its computational resources.
- In light of the fact that one can infer at the edge but not train at the edge, Antonio’s views on whether the goal should be to train at the edge.
- Antonio’s advice to AI practitioners on how to calculate an AI readiness index.

Tweetables:
“‘Wow, this is really unbelievable! We can also create not [only] code software with direct explicit instruction, we can also [create] code software that learns from experience!’ That really [caught] me and I fell in love with this kind of technology.” — @antgrasso [0:03:13]
“I started on Social Media to share my knowledge, my experience, because I think you must share what you see because everyone can benefit of it too.” — @antgrasso [0:03:39]
“We need to shift to better understand what is the meaning of edge computing but we must divide the device itself from the computational resources that we put [there] to harness the power of computational power in proximity.” — @antgrasso [0:16:15]
“I cannot imagine training at the edge. We can do it, yes, but my question is why?” — @antgrasso [0:20:50]

Links Mentioned in Today’s Episode:
- Antonio Grasso
- Digital Business Innovation Srl
- AI Singapore (AIRI Assessment)
- Antonio Grasso on Twitter
- How AI Happens
Apr 7, 2022 • 38min

The Future of NLP with AI Professor, Vice Rector, & Researcher Aleksandra Przegalinska

Today’s guest is Aleksandra Przegalinska, PhD, Vice-Rector at Kozminski University, research associate, and Polish futurist. From studying pure philosophy, Aleksandra moved into AI when she started researching natural language processing in the virtual space. We kickstart our discussion with her account of how she ended up where she is now and how she transferred her skills from philosophy to AI. We hear how Second Life was popular in Asia years ago, why we are seeing a return to anonymization online, and why Aleksandra feels NLP should be called ‘natural language understanding’. We also discover what the real-world applications of NLP are and why text processing is under-utilized. Moving on to more philosophical questions around AI and labor, Aleksandra explains how AI should be used to help people and why what is sometimes simple for a human can be immensely complex for AI. We wrap up with Aleksandra’s thoughts on transformers and why their applications are more important than their capabilities, as well as why she is so excited about the idea of xenobots.

Key Points From This Episode:
- An introduction to today’s guest, Aleksandra Przegalinska, PhD.
- What Aleksandra is researching at the moment and how she ended up in academia.
- Insight into the link between her PhD topic and the metaverse, and the transfer of skills from a philosophy degree to AI.
- How a properly built digital ecosystem allows people freedom of expression, and other takeaways from Aleksandra’s PhD experience.
- The return to online anonymization that we are currently seeing.
- Aleksandra’s experience of NLP in Second Life and how AI has altered the field.
- The role of NLP in Aleksandra’s work today and why she feels it should be called ‘natural language understanding’.
- The real-world applications of NLP: why text processing is under-utilized.
- Why people don’t have to believe that programs are close to human.
- Why Aleksandra feels removing the need for manually annotated data should be a key focus in the field of AI.
- Tradeoffs between automation and human labor; why we should use AI to help humans first.
- How the challenges of automating tasks differ between fields, from creative and marketing to calendar management.
- What Aleksandra thinks of the transformer arms race: why applications are more important than parameters.
- Why Aleksandra feels xenobots will change the world.

Tweetables:
“My major discovery [during my PhD] was that people are capable of building robust identities online and can live two lives. They can have their first life and then they can have their second life online, which can be very different from the one they pursue on-site, in the real world.” — @Przegaa [0:06:42]
“We can all observe that there is a great boom in NLP. I’m not even sure we should call it NLP anymore. Maybe NLP is an improper phrase. Maybe it’s NLU: natural language understanding.” — @Przegaa [0:14:51]
“Transformers seem to be a really big game-changer in the AI space.” — @Przegaa [0:16:40]
“I think that using text as a resource for data analytics for businesses in the future is something that we will see happen in the coming two or three years.” — @Przegaa [0:19:46]
“AI should not replace you, AI should help you at your work and make your work more effective but also more satisfying for you.” — @Przegaa [0:25:31]

Links Mentioned in Today’s Episode:
- Aleksandra Przegalinska on LinkedIn
- Aleksandra Przegalinska on Twitter
Mar 31, 2022 • 33min

Unpacking Facial Recognition Technology at CyberLink

CyberLink's facial recognition technology routinely registers best-in-class accuracy. But how do developers deal with masks, glasses, headphones, or changes in faces over time? How can they prevent spoofing in order to protect identities? And where do computer vision and object detection stop and FRT truly begin? CyberLink Senior Vice President of Global Marketing and US General Manager Richard Carriere and Head of Sales Engineering Craig Campbell join to discuss the endless use cases for facial recognition technology, how CyberLink is improving the tech's accuracy and security, and the ethical considerations of deploying FRT at scale.

Links Mentioned in Today's Episode:
- CyberLink's Ultimate Guide to Facial Recognition
- FaceMe Security SDK Demo
- Get in touch with CyberLink: FaceMe_US@cyberlink.com
Mar 24, 2022 • 33min

Universal Autonomy with Oxbotica VP of Technology Ben Upcroft

Oxbotica is a vehicle software company at the forefront of autonomous technology, and today we have a fascinating chat with Ben Upcroft, the Vice President of Technology. Ben explains Oxbotica's mission of enabling industries to make the most of autonomy, and how their technological progress affects real-world situations. We also get into some of the challenges that Oxbotica, and the autonomy space in general, are currently facing, before drilling down on the important concepts of user trust, future implementations, and creating an adaptable core functionality. The last part of today's episode is spent exploring the exciting possibilities of simulated environments for data collection and the broadening of vehicle experience. Ben talks about the importance of seeking out edge cases to improve their data, and we get into how Oxbotica applies this data across locations.

Key Points From This Episode:
- The constant joy and excitement that Ben feels about his work!
- An introduction to Oxbotica and its main mission as an organization.
- How the advances in autonomy translate into real-world progress in safety and efficiency.
- Handbrakes on the widespread implementation of more autonomy; Ben looks at current limitations.
- Facilitating trust in the public sphere for something new, and the markers of progress.
- Oxbotica's array of vehicles and goals beyond basic transportation.
- Constant evolution and the question of staying on course with the rising tide of technology.
- How generic features allow for an adaptable core functionality in Oxbotica's vehicles.
- Applying data from different environments to boost performance across location types.
- How Oxbotica focuses on simulated edge cases as a means to broaden the capabilities of their technologies.
- The amount of real-world data that is necessary for accurate synthesis.
- Assessing the idea of quality over quantity when it comes to data for AI applications.
- The areas of the AI field that have Ben most excited right now: emulation of the human brain for the creation of new platforms!

Tweetables:
“Oxbotica is about deploying and enabling industries to use and leverage autonomy for performance, for efficiency, and safety gains.” — @ben_upcroft
“The autonomy that we bring revolutionizes how we move around the globe, through logistics transport, on wheeled vehicles.” — @ben_upcroft
“The idea behind the system is that it is modular, enables a core functionality, and I am able to add little extras that customize for a particular domain.” — @ben_upcroft

Links Mentioned in Today’s Episode:
- Ben Upcroft on LinkedIn
- Oxbotica
- Ben Upcroft on Twitter
