
How AI Happens

Latest episodes

Mar 3, 2022 • 41min

A Highly Compositional Future with Dr. Eric Daimler

Dr. Daimler is an authority in Artificial Intelligence with over 20 years of experience in the field as an entrepreneur, executive, investor, technologist, and policy advisor. He is also the founder of the data integration firm Conexus, and we kick off our conversation with the work he is doing to integrate large heterogeneous data infrastructures. This leads us into an exploration of compositionality, a structural feature that enables systems to scale, which Dr. Daimler argues is the future of IT infrastructure. We discuss how the way we apply AI to data is constantly changing, with data sources growing quadratically, and how this requires AI specialists to understand newer forms of math such as category theory. Toward the end of our discussion, we move on to the adoption of AI in technologies that lives depend on, and Dr. Daimler gives his recommendation for how to engender trust among the larger population.

Key Points From This Episode:
- Experience Dr. Daimler has in AI in an academic, commercial, and governmental capacity.
- An issue in the choices being made around how to create data that is useful in large organizations.
- Dr. Daimler's work bringing heterogeneous data together to influence better business decisions.
- How much money is wasted on ETL processes and how bad the jobs in that field are.
- The difference between modularity and compositionality, and why the latter is the future of IT infrastructure.
- How compositionality enables scalability, and the branches of math needed to justify it.
- The work Dr. Daimler is doing in the field of compositionality at Conexus.
- Whether it is crucial to grasp these newer forms of math to achieve AI mastery.
- How AI systems can integrate into contexts involving human labor and empathy.
- The need to bring together probabilistic and deterministic AI in life-and-death contexts.
- How to get the public to trust and believe in AI-powered tech with the capacity to save lives.
- What AI practitioners can do to ensure they use their skillset to create a better future.

Tweetables:
"You can create data that doesn't add more fidelity to the knowledge you're looking to gain for better business decisions and that is one of the limitations that I saw expressed in the government and other large organizations." — @ead [0:01:32]
"That's the world, is compositionality. That is where we are going and the math that supports that, type theory, categorical theory, categorical logic, that's going to sweep away everything underneath IT infrastructure." — @ead [0:10:23]
"At the trillions of data, a trillion data sources, each growing quadratically, what we need is category theory." — @ead [0:13:51]
"People die and the way to solve that problem when you are talking about these life and death contexts for commercial airplane manufacturers or in energy exploration where the consequences of failure can be disastrous is to bring together the sensibilities of probabilistic AI and deterministic AI." — @ead [0:24:07]
"Circuit breakers, oversight, and data lineage, those are three ways that I would institute a regulatory regime around AI and algorithms that will engender trust amongst the larger population." — @ead [0:35:12]

Links Mentioned in Today's Episode:
Dr. Eric Daimler on LinkedIn
Dr. Eric Daimler on Twitter
Conexus
Feb 24, 2022 • 31min

Transfer Learning & Solving Unstructured Data with Indico Data CTO Slater Victoroff

Irrespective of the application or the technology, a common problem among AI professionals always seems to be data. Is there enough of it? What do we prioritize? Is it clean? How do we annotate it? Today's guest, however, believes that AI is not data-limited but compute-limited. Joining us to share some very interesting insights on the subject is Slater Victoroff, Founder and Chief Technology Officer at Indico Data, an unstructured data platform that enables users to build innovative, mission-critical enterprise workflows that maximize opportunity, reduce risk, and accelerate revenue. Slater explains how he came to co-found Indico Data despite previously believing that deep learning was dead. He explains what happened that unlocked deep learning, how he was influenced by the AlexNet paper, and how Indico goes about solving the problem of unstructured data.

Key Points From This Episode:
- How Slater Victoroff came to found Indico and how he came to understand the value of deep learning.
- How Indico's approach has evolved over time.
- What happened that unlocked deep learning, and what inspired Slater to incorrectly believe it was over before it began.
- The AlexNet paper published in 2012 and its influence on deep learning.
- Insight into the application of deep learning at Indico and their focus on human-machine interaction.
- What is meant by "solving the problem of unstructured data."
- How Indico is reducing the price of building unstructured use cases.
- Thoughts on whether the downsizing of investment and hardware requirements of AI technology is a necessary outcome.
- The surprisingly low percentage of projects that succeed.
- Why Slater believes that AI today is not data-limited but compute-limited.
- Why resolving compute won't remove the need for clean, annotated data.
- Whether determining which data to prioritize is still a computer vision problem.
- How the refocus into transfer learning has affected Indico's approach.
- What Slater is really excited about in the short- and medium-term future of AI.

Tweetables:
"Deep learning is particularly useful for these sorts of unstructured use-cases, image, text, audio. And it's an incredibly powerful tool that allows us to attack these use cases in a way that we fundamentally weren't able to otherwise." — @sl8rv [0:02:44]
"By and large, AI today is not data-limited, it is compute-limited. It is the only field in software that you can say that." — @sl8rv [0:19:27]
"That's really this next frontier though: This is where transfer learning is going next, this idea 'Can I take visual information and language information? Can I understand that together in a comprehensive way, and then give you one interface to learn on top of that consolidated understanding of the world?'" — @sl8rv [0:26:05]
"We have gone from asking the question 'Is transfer learning possible?' to asking the question 'What does it take to be the best in the world at transfer learning?'" — @sl8rv [0:27:03]

Links Mentioned in Today's Episode:
"Visualizing and Understanding Convolutional Networks"
Slater Victoroff
Slater Victoroff on Twitter
Indico Data
Feb 17, 2022 • 29min

AI Opportunity in the Space Ecosystem with Space Foundation COO Shelli Brunswick

The innovations that drive space exploration not only aid us in discovering other worlds; they also benefit us right here on Earth. Today's guest is Shelli Brunswick, who joins us to talk about the role of AI in space exploration and how the 'space ecosystem' can create jobs and career opportunities on Earth. Shelli is the COO of the Space Foundation and was selected as the 2020 Diversity and Inclusion Officer and Role Model of the Year by WomenTech Network and a Woman of Influence by the Colorado Springs Business Journal. We kick our discussion off by hearing how Shelli got to her current role and what it entails. She talks about how connected the space industry has become to many others, and how this amounts to a 'space ecosystem': a rich field for opportunity, innovation, and commerce. We talk about the many innovations that have stemmed from space exploration, the role they play on this planet, and the possibilities this holds as the space ecosystem continues to grow. She gets into the programs at the Space Foundation that encourage entrepreneurship and the ways that innovators can direct their efforts to participate in the space ecosystem. We also explore the many ways that AI plays a role in the space ecosystem and how the AI being utilized across industries on Earth will find later applications in space. Tune in today to learn more!

Key Points From This Episode:
- Shelli's background and career journey to her role as COO of the Space Foundation.
- What made Shelli decide that she wanted to occupy her current role.
- Using space innovation to benefit us on Earth; the definition of 'space ecosystem'.
- The many industries and countries that participate in the space ecosystem.
- Using the Center for Innovation and Education to create diversity and opportunities across industries.
- The role of AI in the growing space ecosystem: satellites, space exploration, GPS, and more.
- How AI regulates hydroponic agriculture on Earth and can in space too.
- The many challenges of living off-world and the role AI will play.
- How entrepreneurship in the context of the space ecosystem is taught at the Space Commerce Institute.
- The spinoff commercialization and innovations that come from the space industry.
- How AI practitioners can point their expertise and technology into the space ecosystem.
- What AI will change in the world of the future and the effects of this on jobs.

Tweetables:
"What we really need to do is wrap it back to how that space technology, that space innovation, that investing in space, benefits us right here on planet Earth and creates jobs and career opportunities." — @shellibrunswick [0:05:52]
"The sky is not the limit [for the role that] AI can play in this." — @shellibrunswick [0:12:12]
"It is the Wild West. It is exciting and, if you want to be an entrepreneur, buckle in because there is an opportunity for you!" — @shellibrunswick [0:20:36]
"You can sit in the Space Symposium sessions and hear what are those governments investing in, what are those companies investing in, and how can you as an entrepreneur create a product or service that's related to AI that helps them fill that capability gap?" — @shellibrunswick [0:22:00]

Links Mentioned in Today's Episode:
Shelli Brunswick on LinkedIn
Shelli Brunswick on Twitter
The Space Foundation
The Center for Innovation and Education
Space Symposium
Feb 10, 2022 • 34min

Autonomous Vehicles' Impact on Cities with Lyft's Sarah Barnes

A future filled with autonomous vehicles promises a driving utopia: maximum-efficiency navigation that decreases traffic and congestion, safety features that drastically reduce collisions with other cars, bikes, or pedestrians, and an electric-first approach that lowers greenhouse gas emissions. But as today's guest asserts, on the back of her extensive research, the implications of a huge increase in autonomous vehicles on our streets aren't rosy by default. Sarah Barnes works on the micro-mobility team at Lyft and has published a variety of works that document the expected implications of more autonomous vehicles in major metropolitan areas: implications that are good, bad, and ugly. Sarah argues that without a serious focus on three transport revolutions (making transport shared, electric, AND autonomous), congestion and pollution could be here to stay. Sarah walks me through the various implications and how local governments and AI practitioners can partner on policy and technology to create a future that works for everyone.
Feb 3, 2022 • 27min

The Opportunity of NLG with Arria CTO Neil Burnett

Arria is a Natural Language Generation (NLG) company that replicates the human process of expertly analyzing and communicating data insights. We caught up with their CTO, Neil Burnett, to learn more about how Arria's technology goes beyond the standard rules-based NLP approach, as well as how the technology develops and grows once it's placed in the hands of the consumer. Neil explains the huge opportunity within NLG, and how solving for seamless language-based communication between humans and machines will result in increased trust and widespread adoption of AI/ML technologies.
Dec 20, 2021 • 24min

Developing Solid State LiDAR with Baraja CTO Cibby Pulikkaseril

Traditional LiDAR systems require moving parts to operate, making them less cost-effective, robust, and safe. Cibby Pulikkaseril is the Founder and CTO of Baraja, a company that has reinvented LiDAR for self-driving vehicles by using a color-changing laser routed by a prism. After his Ph.D. in lasers and fiber optic communications, Cibby got a job at a telecom equipment company, and that is when he discovered that a laser used in DWDM (dense wavelength-division multiplexing) networks could be used to reinvent LiDAR. In this conversation, you'll hear exactly how Baraja's LiDAR technology works and what it means for the future of autonomous vehicles. Cibby also talks about some of the upcoming challenges we will face in the world of self-driving cars and the solutions his innovation offers. Furthermore, Cibby explains what spectrum scan LiDAR can offer the field of robotics more broadly.

Key Points From This Episode:
- Cibby's background in fiber optic communications and what led him to found Baraja.
- Realizing that a laser used in DWDM networks could be applied to LiDAR.
- Why Cibby decided that autonomous vehicles (AVs) were a good application for the laser.
- How the laser used by Baraja can steer a LiDAR beam without any moving parts, making the system cheaper.
- Velodyne's contributions and other innovations in the LiDAR space.
- A description of how spectrum scan LiDAR works, using a color-changing laser routed by a prism.
- The infinite resolution made possible by colored light and how AI will make use of it.
- Hazards around the over-proliferation of conventional LiDAR lasers and how Baraja's tech gets past this.
- Other challenges Cibby predicts will exist once AVs start to proliferate.
- How Baraja's solid-state LiDAR technology will advance other fields of robotics.
- Cibby's level of involvement in the coding and R&D at Baraja as the CTO.
- Technical areas that the Baraja team is researching and developing, such as homodyne detection.
- Advice from Cibby on how to innovate in the already cutting-edge space of computer vision.

Tweetables:
"We started to think, what else could we do with it. The insight was that if we could get the laser light out of the fiber and into free space, then we could start doing LiDAR." — Cibby Pulikkaseril [0:01:23]
"We were excited by this idea that there was going to be a change in the future of mobility and we can be a part of that wave." — Cibby Pulikkaseril [0:02:13]
"We are the inventors of what we call spectrum scan LiDAR that is harnessing the natural phenomenon of the color of light to be able to steer a beam without any moving parts." — Cibby Pulikkaseril [0:03:37]
"We had this insight which is that if you can change the color of light very rapidly, by coupling that into prism-like optics, this can route the wavelengths based on the color and so you can steer a beam without any moving parts." — Cibby Pulikkaseril [0:03:57]

Links Mentioned in Today's Episode:
Cibby Pulikkaseril on LinkedIn
Baraja
Nov 11, 2021 • 23min

Building Trustworthy Behaviomedics with Blueskeye CEO Michel Valstar

Academic-turned-entrepreneur Michel Valstar joins How AI Happens to explain how his behaviomedics company, Blueskeye AI, prioritizes building trust with its users. Much of the approach features data opt-ins and on-device processing, which necessarily results in less data collection. Michel explains how his team is able to continue gleaning meaningful insight from smaller portions of data than the average AI practitioner is used to.

Links Mentioned in Today's Episode:
Michel Valstar on LinkedIn
Blueskeye AI
Nov 5, 2021 • 42min

Egocentric Perception with Facebook's Manohar Paluri

Joining us today is Manohar Paluri, Senior Director at Facebook AI. Mano discusses the biggest challenges facing the field of computer vision, and the commonalities and differences between first- and third-person perception. He dives into the complexity of detecting first-person perception and how to overcome the privacy and ethical issues of egocentric technology. He also breaks down the mechanisms underlying AI based on decision trees compared to AI based on real-world data, and how they result in two different ideals: transparency or accuracy.

Key Points From This Episode:
- Talking to Manohar Paluri, his background in IT, and how he wound up at Facebook AI.
- Manohar's advice on the pros and cons of doing a Ph.D.
- Why computer vision is so complex for machines but so simple for humans.
- Why the term "computer vision" is not a limiting definition in terms of the sensors used.
- How computer vision and perception differ.
- The two problems facing computer vision: recognizing entities and augmenting perception.
- Personalized data, generalized learning ability, and adaptability: the three problems responsible for the low number of entities that computer vision recognizes.
- Managing the direction Manohar's organization is going: egocentric vision, predicting the impact of modeling, and finding the balance between transparency and accuracy.
- The differences between first- and third-person perception: intention, positioning, and long-form reasoning.
- The similarity between first- and third-person perception: both are trying to understand the world.
- Which sensors are required to predict intention: gaze and hand-object interaction.
- The privacy and ethical issues with regard to egocentric technologies.
- Why Manohar believes striking a balance between accuracy and transparency will set the standard.
- The three prospects in AI that excite Manohar the most: the next computing platform, bringing different modalities together, and improved access to technology.

Tweetables:
"What I tell many of the new graduates when they come and ask me about 'Should I do my Ph.D. or not?' I tell them that 'You're asking the wrong question.' Because it doesn't matter whether you do a Ph.D. or you don't do a Ph.D., the path and the journey is going to be as long for anybody to take you seriously on the research side." — Manohar Paluri [0:02:40]
"Just to give you a sense, there are billions of entities in the world. The best of the computer vision systems today can recognize in the order of tens of thousands or hundreds of thousands, not even a million. So abandoning the problem of core computer vision and jumping into perception would be a mistake in my opinion. There is a lot of work we still need to do in making machines understand this billion-entity taxonomy." — Manohar Paluri [0:11:33]
"We are in the research part of the organization, so whatever we are doing, it's not like we are building something to launch over the next few months or a year, we are trying to ask ourselves how does the world look three, five, ten years from now and what are the technological problems?" — Manohar Paluri [0:20:00]
"So my hope is, once you set a standard on transparency while maintaining the accuracy, it will be very hard for anybody to justify why they would not use such a model compared to a more black-box model for a little bit more gain in accuracy." — Manohar Paluri [0:32:55]

Links Mentioned in Today's Episode:
Manohar Paluri on LinkedIn
Facebook AI Research Website
Facebook AI Website: Ego4D
Oct 28, 2021 • 29min

Responsible AI Economics with Katya Klinova & The Partnership on AI

In recent years, the focus of AI developers has been to implement technologies that replace basic human labor. Talking to us today about why this is the wrong application for AI (right now) is Katya Klinova, the Head of AI, Labor, and the Economy at The Partnership on AI. Tune in to find out why replacing human labor doesn't benefit the whole of humanity, and what our focus should be instead. We delve into the threat of "so-so technologies" and what the developer's role should be in approaching ethical vendors and looking after the workers supplying them with data. Join us to find out more about how AI can be used to better the whole of society if there's a shift in the field's current aims.

Key Points From This Episode:
- An introduction to Katya Klinova, Head of AI, Labor, and the Economy at The Partnership on AI.
- How her expectations of the world after her undergraduate degree shaped her.
- Pursuing a degree in economics to understand how AI impacts labor and economics.
- The role of The Partnership on AI in dissipating technological gains.
- Who is impacted when AI is introduced to a market: the consumers and the workers.
- How different companies fall short in the ways they benefit everyone.
- What the "threat of so-so technology" is.
- Should people become shareholders in AI technology that they helped to train?
- How capitalism incentivizes "so-so technologies."
- The role of developers in selecting vendors and responsible sourcing.
- Why it's important to realize that data labelers are employees and not just numbers.
- Shifting the focus of AI from automation to complementarity.
- Why now is not the time to be replacing human labor.

Tweetables:
"Creating AI that benefits all is actually a very large commitment and a statement, and I don't think many companies have really realized or thought through what they're actually saying in the economic terms when they're subscribing to something like that." — @klinovakatya [0:09:45]
"It's not that you want to avoid all kinds of automation, no matter what. Automation, at the end of the day, has been the force that lifted living conditions and incomes around the world, and has been around for much longer than AI." — @klinovakatya [0:11:28]
"We compensate people for the task or for their time, but we are not necessarily compensating them for the data that they generate that we use to train models that can displace their jobs in the future." — @klinovakatya [0:14:49]
"Might we be automating too much for the kind of labor market needs that we have right now?" — @klinovakatya [0:23:14]
"It's not the time to eliminate all of the jobs that we possibly can. It's not the time to create machines that can match humans in everything that they do, but that's what we are doing." — @klinovakatya [0:24:50]

Links Mentioned in Today's Episode:
Katya Klinova on LinkedIn
"Automation and New Tasks: How Technology Displaces and Reinstates Labor"
The Partnership on AI: Responsible Sourcing
Oct 21, 2021 • 30min

Moxie the Robot & Embodied CTO Stefan Scherer

In this episode, we talk to Stefan Scherer, CTO of Embodied, about why he decided to focus on the more nuanced challenge of developing children's social-emotional skills. Stefan takes us through how encouraging children to mentor Moxie (a friendly robot) through social interaction helps them develop their interpersonal relationships. We dive into the relevance of scripted versus unscripted conversation in different AI technologies, and how Embodied taught Moxie to define abstract concepts such as "kindness."

Key Points From This Episode:
- Welcome to Stefan Scherer, CTO of Embodied and lead researcher and developer of Embodied's SocialX™ technology, Moxie.
- The goal of Embodied: using a natural mode of communication to support children's social development.
- Mentoring Moxie: how Moxie teaches children social-emotional learning without being a teacher.
- Why Stefan and Embodied focused on the challenge of social-emotional skills, not STEM.
- Developing a technology that captures the infinite answers to social-emotional questions, using neural networks and sentiment analysis.
- How using few-shot learning reduced the amount of data needed to train Moxie.
- Why it's important to make the transition between freer and scripted conversations seamless.
- How the percentage of scripted versus non-scripted conversation differs based on the context of the technology.
- How Moxie adapts to children's changing needs and desires.
- How Moxie acts as a springboard in teaching children to form long-term relationships.
- The hardware behind Moxie: the ethical considerations around home devices, and data protection.
- Why Moxie looks the way it does: making it affordable.

Tweetables:
"Human behavior is very complex, and it gives us a window into our soul. We can understand so much more than just language from human behavior; we can understand an individual's wellbeing and their abilities to communicate with others." — Stefan Scherer [0:01:04]
"It is not sufficient to work on the easy challenges at first and then expand from there. No, as a startup you have to tackle the hard ones first because that's where you set yourself apart from the rest." — Stefan Scherer [0:04:53]
"Moxie comes into the world of the child with the mission to basically learn how to be a good friend to humans. And Moxie puts the child into this position of teaching Moxie about how to do that." — Stefan Scherer [0:17:40]
"One of the most important aspects of Moxie is that Moxie doesn't serve as the destination; Moxie is really a springboard into life." — Stefan Scherer [0:18:29]
"We did not want to overengineer Moxie; we really wanted to basically afford the ability to have a conversation, to be able to multimodally interact, and yet be as frugal with the amount of concepts that we added or the amount of capabilities that we added." — Stefan Scherer [0:27:17]

Links Mentioned in Today's Episode:
See Moxie in Action
Stefan Scherer on LinkedIn
Embodied Website
