
How AI Happens

Latest episodes

Jul 7, 2022 • 26min

Calibrate Ventures Partners Jason Schoettler, Kevin Dunlap

In today’s conversation, we learn about Jason and Kevin’s career backgrounds, the potential of the deep-technology sector, the ideas that excite them most, the challenges of investing in AI-based companies, what kinds of technology are easily understood by consumers, what makes a technological innovation successful, and much more.

Key Points From This Episode:
- The backgrounds and professional paths that led Jason and Kevin to their current roles.
- Why they started Calibrate Ventures and originally entered the sector.
- How the deep-technology sector can solve current problems.
- The kinds of new technology and innovation they get most excited about.
- The most essential quality of innovative technology: it’s what people want.
- A rundown of the diverse, experienced, and talented team they work with.
- Jason shares an example of a technological innovation that solved a real-world problem.
- How they differentiate their approach when investing in AI companies.
- What drives the longer sales cycles in the AI technology sector.
- An example of the challenges of integrating AI technology with the business side.
- The benefits that Jason and Kevin’s experience adds to the company.
- Examples of the kinds of technology that translate well to the consumer.
- Their opinions on automation and augmentation.
- We wrap up the show with advice from Jason and Kevin for AI entrepreneurs.

Tweetables:
“I think for me personally, the cycle-time was very long. You work on projects for a very long time. As an investor, I get to see new ideas and new concepts every day. From an intellectual curiosity standpoint, there couldn’t be a better job.” — Kevin Dunlap [0:05:17]
“So that lights me up. When I hear somebody talk about a problem that they are looking to solve and how their technology can do it uniquely, with some type of competitive or differentiated advantage we think is sustainable.” — Jason Schoettler [0:08:14]
“The things that really excite us are not where we can do better than humans but, first, where there are not humans doing work right now where we need humans doing work.” — Jason Schoettler [0:20:44]
“Anytime that someone is doing a job that is dangerous, that is able to be solved with technology, I think we owe it to ourselves to do that.” — Kevin Dunlap [0:22:39]

Links Mentioned in Today’s Episode:
- Jason Schoettler on LinkedIn
- Kevin Dunlap on LinkedIn
- Calibrate Ventures
- Calibrate Ventures on LinkedIn
- GrayMatter Robotics
- GrayMatter Robotics on LinkedIn
Jun 23, 2022 • 37min

x.ai Founder Dennis Mortensen

Whether you’re building AI for self-driving cars or for scheduling meetings, it’s all about prediction! In this episode, we explore the complexity of teaching the human power of prediction to machines.

Key Points From This Episode:
- Dennis shares an overview of what he has spent his career focusing on.
- How Dennis defines an intelligent agent.
- The role of prediction in the AI space.
- Dennis explains the mission that drove his most recent entrepreneurial venture, x.ai (acquired by Bizzabo).
- The challenges of transferring humans’ capacity for prediction and empathy to machines.
- Some of Dennis’s key learnings from his time working on the technology for x.ai.
- The unrealistic expectations that humans have of machines.
- How we can teach humans to have empathy for machines.
- Dennis’s hope for the next generation in terms of their approach to AI.
- A lesson Dennis learned from his daughter about AI and about human nature.
- What Dennis is most excited about in the AI space.

Tweetables:
“The whole umbrella of AI is really just one big prediction engine.” — @DennisMortensen [0:03:38]
“Language is not a solved science.” — @DennisMortensen [0:06:32]
“The expectation of a machine response is different to that of a human response to the same question.” — @DennisMortensen [0:11:36]

Links Mentioned in Today’s Episode:
- Dennis Mortensen on LinkedIn
- Bizzabo [formerly x.ai]
Jun 16, 2022 • 29min

Unity SVP of AI Danny Lange: The Industrial Metaverse

Leading AI companies are adopting simulation, synthetic data, and other aspects of the metaverse at an incredibly fast rate, and the opportunities for AI/machine learning practitioners are endless. Tune in today for a fascinating conversation about how the real world and the virtual world can be blended in what Danny refers to as “the real metaverse.”

Key Points From This Episode:
- The career path that led Danny to his current role as Senior Vice President of AI at Unity.
- Data: the machine learning challenge that Danny has dealt with in many different forms throughout his career.
- An explanation of how Unity uses data to make game recommendations to players.
- How deep learning embedding works.
- What drew Danny to Unity.
- The benefits of using synthetic data.
- How Unity ensures that the synthetic data they create is as unbiased as possible.
- The importance of anchoring your synthetic data to a real-world counterpart.
- Danny’s thoughts on the potential of the metaverse.
- Examples of the career opportunities that the metaverse has opened up for AI/machine learning practitioners.

Tweetables:
“When you play a game, I don’t need to know your name, your age. I don’t need to know where you live, or how much you earn. All that really matters is that my system needs to learn the way you play and what you are interested in in your gameplay, to make excellent recommendations for other games. That’s what drives the gaming ecosystem.” — @danny_lange [0:03:16]
“Deep learning embedding is something that is really driving a lot of progress right now in the machine learning AI space.” — @danny_lange [0:06:04]
“The world is built on uncertainty and we are looking at simulation in an uncertain world, rather than in a Newtonian, deterministic world.” — @danny_lange [0:23:23]

Links Mentioned in Today’s Episode:
- Danny Lange on LinkedIn
- Unity
Jun 9, 2022 • 27min

CarbonChain Head of Data & Machine Learning Archy De Berker

In today’s episode, Archy De Berker, Head of Data and Machine Learning at CarbonChain, explains how he and his team calculate carbon footprints, some of the challenges they face in this line of work, the most valuable use of machine learning in their business (and for climate change solutions as a whole), and some important lessons he has learned throughout his diverse career so far!

Key Points From This Episode:
- An overview of Archy’s career trajectory, from academic neuroscientist to Head of Data and Machine Learning at CarbonChain.
- The foundational mission of CarbonChain.
- Archy explains how machine learning can be applied to energy storage as a climate change solution.
- The industries that CarbonChain focuses on.
- How Archy and his team calculate carbon footprints.
- A key challenge of carbon footprinting.
- Where machine learning provides the most value for CarbonChain.
- The importance of the field of document understanding.
- A story from Archy’s time at Element AI that highlights the value of having technical people working as close as possible to the design and data generation.
- Why Archy chose to move into the product management realm.
- Additional ways that machine learning can help solve climate change issues.

Tweetables:
“We build automated carbon footprinting for the world’s most polluting industries. We’re really trying to help people who are buying things from carbon-intense industries figure out where they can get lower carbon versions of the same kind of products.” — @ArchydeB [0:02:14]
“A key challenge for carbon footprinting is that you need to be able to understand somebody’s business in order to tell them what the carbon footprint of their activities is.” — @ArchydeB [0:13:01]
“Probably the most valuable place for machine learning in our business is taking all this heterogeneous customer data from all these different systems and being able to map it onto a very rigid format that we can then retrieve information from our databases for.” — @ArchydeB [0:13:24]

Links Mentioned in Today’s Episode:
- Archy de Berker on LinkedIn
- CarbonChain
Jun 3, 2022 • 34min

Privacy in AI with MATR Ventures Partner Hessie Jones

MATR Ventures Partner Hessie Jones is dedicated to solving issues around AI ethics, as well as diversity and representation in the space. In our conversation with her, she breaks down how she came to believe something was wrong with the way companies harvest and use data, and the steps she has taken toward solving the privacy problem. We discuss the danger of intentionally convoluted terms and conditions and the problem with synthetic data. Tune in to hear about the future of biometrics and data privacy, and the emerging technologies using data to increase accountability.

Key Points From This Episode:
- Hessie Jones’ background: from marketing to her current role at MATR Ventures.
- Hessie’s focus on AI ethics and privacy, and diversity in the venture capital space.
- Her mission to provide equal access to programs and investment.
- What inspired her to tackle the problem of AI ethics and privacy.
- The consequences of the Snowden revelations and the responsibility of tech to enforce customer privacy.
- Hessie’s path toward a solution to the privacy problem.
- The danger of blanket terms and conditions.
- The problem with synthetic data.
- Crass uses of facial recognition.
- Emerging technologies using data to increase accountability.
- The future of biometrics and data privacy.
- The mission of MATR Ventures and who they invest in.
- Examples of technologies MATR Ventures employs to ensure accountability.

Tweetables:
“Venture capital is not immune to the diversity problems that we see today.” — Hessie Jones [0:05:04]
“We should separate who you are as an individual from who you are as a business customer.” — Hessie Jones [0:08:49]
“The problem I see with synthetic data is the rise of deep fakes.” — Hessie Jones [0:21:24]
“The future is really about data that’s not shared, or if it’s shared, it’s shared in a way that increases accountability.” — Hessie Jones [0:26:43]

Links Mentioned in Today’s Episode:
- Hessie Jones on LinkedIn
- MATR Ventures
- Responsible AI
May 26, 2022 • 27min

Qualcomm Head of AI & ML Product Management Dr. Vinesh Sukumar

During Vinesh Sukumar’s colorful career, he worked at NASA, Apple, Intel, and a variety of other companies before finding his way to Qualcomm, where he is currently Head of AI/ML Product Management. In today’s conversation, Vinesh shares his experience of developing the camera for the very first iPhone and one of the biggest lessons he learned from working with Steve Jobs. We then discuss what his current role entails and the biggest challenge that comes with it, Qualcomm’s approach to scalability from a hardware, systems, and software standpoint, and his thoughts on why edge computing is so important.

Key Points From This Episode:
- An overview of Vinesh’s career trajectory, including his experiences at NASA, Apple, and Intel.
- The focal area of Vinesh’s PhD.
- Challenges that Vinesh faced while working on cutting-edge technology for camera phones.
- Some of the early AI applications that were used in smartphone cameras.
- The most important factors to consider when developing cameras for phones.
- Valuable lessons that Vinesh learned from working with Steve Jobs.
- What Vinesh’s role as Head of AI/ML Product Management at Qualcomm consists of.
- Why optimization is one of the biggest technical challenges Vinesh faces in his role at Qualcomm.
- The four buckets of MLOps.
- Vinesh explains why edge computing is so important.
- The benefits of building intelligence into devices rather than requiring a connection to the cloud.
- Qualcomm’s approach to scalability.
- Why Vinesh is excited about cognitive AI.

Tweetables:
“Camera became one of the most important features for a consumer to buy a phone. Then visual analytics, AI, deep learning, ML really started seeping into images, and then into videos, and now the most important consumer influencing factor to buy a phone is the camera.” — Vinesh Sukumar [0:07:01]
“Reaction time is much better when you have intelligence on the device, rather than giving it to the cloud to make the decision for you.” — Vinesh Sukumar [0:20:48]

Links Mentioned in Today’s Episode:
- Vinesh Sukumar on LinkedIn
- Qualcomm
May 19, 2022 • 36min

AI in the Metaverse with Dr. Mark van Rijmenam

Joining us on this episode of How AI Happens is four-time author, entrepreneur, future-tech strategist, and The Digital Speaker himself, Dr. Mark van Rijmenam. Mark explains the extraordinary opportunities and challenges facing business leaders, consumers, regulators, policymakers, and other metaverse stakeholders trying to navigate the future of the internet; the important role that AI will play in the metaverse; why he believes we need to enable what he calls ‘anonymous accountability’; and how you can actively participate in building ethical AI.

Key Points From This Episode:
- Meet Dr. Mark van Rijmenam and gain some insight into his trajectory thus far.
- The role that AI and blockchain played in Mark’s book, The Organisation of Tomorrow.
- What we can learn about feedback loops from the failures of Microsoft’s Tay chatbot.
- At what point technology shifts from a tool employed by practitioners to an autonomous agent.
- Distinguishing between artificial general intelligence (AGI) and Super AI.
- Mark responds to those who believe we will never reach Super AI; he believes it’s inevitable!
- The advent of the metaverse and why Mark believes it will unlock a trillion-dollar social economy, as per his book, Step Into the Metaverse.
- How Web 3.0 will allow us to reclaim control of our data, digital assets, and identity; moving from value extraction to value creation.
- Understanding the difference between the metaverse and Web 3.0 without conflating the two.
- How Mark sees AI participating in the metaverse and the role it will play in this ‘new world’.
- The dangers that come with the uncanny ‘deep fakes’ of the future.
- Our responsibility to properly verify the digital information we consume, and how AI can help.
- What Mark means when he says we need to enable ‘anonymous accountability’.
- How to take advantage of the career opportunities of Web 3.0 and the metaverse, and how you can contribute to building ethical AI.

Tweetables:
“The social and the material [systems are] very good but, for the organizations of tomorrow, we need to add a third actor, which is the artificial.” — @VanRijmenam [0:03:05]
“Once we reach AGI, that will be a fundamental shift because, once we have AGI—which is as intelligent as a human being, but at an exponential speed—everything will change.” — @VanRijmenam [0:08:34]
“How can we create a metaverse that doesn’t continue on the path of the internet of today? We have this blank canvas where we can construct this immersive internet in ways where we do own our data, [digital assets, identity, and reputation] using a self-sovereign approach.” — @VanRijmenam [0:15:09]
“Technology is neutral. My objective is to help people move to the positive side of technology.” — @VanRijmenam [0:29:24]

Links Mentioned in Today’s Episode:
- Dr. Mark van Rijmenam on LinkedIn
- Dr. Mark van Rijmenam on Twitter
- The Digital Speaker
- Datafloq
- Between Two Bots Podcast
- Step Into the Metaverse
- The Organisation of Tomorrow
- ‘The Matrix Awakens: An Unreal Engine 5 Experience’
May 12, 2022 • 31min

IBM Master Inventor & AI Advisor to the UN Neil Sahota

Neil Sahota is an AI Advisor to the UN, co-founder of the UN’s AI for Good initiative, an IBM Master Inventor, and the author of Own the AI Revolution. In today’s episode, Neil shares some of the valuable lessons he learned during his first experience working in the AI world, which involved training the Watson computer system. We then dive into a number of topics, ranging from Neil’s thoughts on synthetic data and the language-learning capacity of AI versus a human child, to an overview of the AI for Good initiative and what Neil believes a “cyborg future” could entail!

Key Points From This Episode:
- A few of the thousands of data points that humans use to make rapid judgments.
- Neil’s introduction to the world of AI.
- How data collection changed AI, using the Watson computer system as an example.
- Lessons that Neil learned through training Watson.
- The relative importance of confidence levels when training AI in different fields.
- Why reaching a 99.9% confidence level is not realistic.
- Examples of cases where synthetic data is and isn’t helpful.
- A major difference between the language-learning trajectory of AI and that of a human child.
- The areas that Neil believes AI is best suited for.
- The focus of the United Nations’ AI for Good initiative.
- The UN’s approach to bringing AI technologies to remote parts of the world.
- The benefits of being exposed to technology at a young age.
- The cyborg future: what Neil believes this is going to look like.
- Why Neil is excited about AI augmentation for human creativity.

Tweetables:
“We, as human beings, have to make really rapid judgement calls, especially in sports, but there’s still thousands of data points in play and the best of us can only see seven to 12 in real time.” — @neil_sahota [0:01:21]
“Synthetic data can be a good bridge if we’re in a very closed ecosystem.” — @neil_sahota [0:11:47]
“For an AI system, if it gets exposed to about 100 billion words it becomes proficient and fluent in a language. If you think about a human child, it only needs about 30 billion words. So, it’s not the volume that matters; there’s certain words or phrases that trigger the cognitive learning for language. The problem is that we just don’t understand what that is.” — @neil_sahota [0:14:22]
“Things that are more hard science, or things that have the least amount of variability, are the best things for AI systems.” — @neil_sahota [0:16:26]
“Local problems have global solutions.” — @neil_sahota [0:20:06]

Links Mentioned in Today’s Episode:
- Neil Sahota
- Neil Sahota on LinkedIn
- Own the A.I. Revolution
- AI for Good
May 5, 2022 • 22min

Prospitalia Group CEO Dr. Marcell Vollmer

Dr. Marcell Vollmer is the CEO of Prospitalia Group, formerly Chief Innovation Officer at Celonis and Chief Digital Officer at SAP. He joins us to discuss machine learning advances in MedTech and how practitioners can be thoughtful about when it is appropriate to deploy ML.
Apr 28, 2022 • 25min

AI Safety Engineering - Dr. Roman Yampolskiy

Today’s guest has committed many years of his life to trying to understand Artificial Superintelligence and the security concerns associated with it. Dr. Roman Yampolskiy is a computer scientist (with a Ph.D. in behavioral biometrics) and an Associate Professor at the University of Louisville. He is also the author of the book Artificial Superintelligence: A Futuristic Approach. Today he joins us to discuss AI safety engineering. You’ll hear about some of the safety problems he has discovered in his 10 years of research, his thoughts on accountability and ownership when AI fails, and whether he believes it’s possible to enact any real safety measures in light of the decentralization and commoditization of processing power. You’ll discover some of the near-term risks of not prioritizing safety engineering in AI, how to make sure you’re developing it in a safe capacity, and which organizations are deploying it in a way that Dr. Yampolskiy believes to be above board.

Key Points From This Episode:
- An introduction to Dr. Roman Yampolskiy, his education, and how he ended up in his current role.
- Insight into Dr. Yampolskiy’s Ph.D. dissertation in behavioral biometrics and what he learned from it.
- A definition of AI safety engineering.
- The two subcomponents of AI safety: systems we already have, and future AI.
- Thoughts on whether there is a greater need for guardrails in AI than in other forms of technology.
- Some of the safety problems that Dr. Yampolskiy has discovered in his 10 years of research.
- Dr. Yampolskiy’s thoughts on the need for an AI security governing body or oversight board.
- Whether it’s possible to enact any sort of safety in light of the decentralization and commoditization of processing power.
- Solvable problem areas.
- Negotiating the tradeoff between giving AI creative freedom and being able to control it.
- Whether there will come a time when we must decide whether to go past the point of no return in terms of AI superintelligence.
- Some of the near-term risks of not prioritizing safety engineering in AI.
- What led Dr. Yampolskiy to focus on this area of AI expertise.
- How to make sure you’re developing AI safely.
- Thoughts on accountability and ownership when AI fails, and the legal implications of this.
- Other problems Dr. Yampolskiy has uncovered.
- Thoughts on the need for a greater understanding of the implications of AI work, and whether this is a conceivable solution.
- Use cases and organizations that are deploying AI in a way that Dr. Yampolskiy believes to be above board.
- Questions Dr. Yampolskiy would ask if he were on an AI development safety team.
- How you can measure progress in safety work.

Tweetables:
“Long term, we want to make sure that we don’t create something which is more capable than us and completely out of control.” — @romanyam [0:04:27]
“This is the tradeoff we’re facing: Either [AI] is going to be very capable, independent, and creative, or we can control it.” — @romanyam [0:12:11]
“Maybe there are problems that we really need Superintelligence [to solve]. In that case, we have to give it more creative freedom, but with that comes the danger of it making decisions that we will not like.” — @romanyam [0:12:31]
“The more capable the system is, the more it is deployed, the more damage it can cause.” — @romanyam [0:14:55]
“It seems like it’s the most important problem; it’s the meta-solution to all the other problems. If you can make friendly, well-controlled superintelligence, everything else is trivial. It will solve it for you.” — @romanyam [0:15:26]

Links Mentioned in Today’s Episode:
- Dr. Roman Yampolskiy
- Artificial Superintelligence: A Futuristic Approach
- Dr. Roman Yampolskiy on Twitter
