
How AI Happens

Latest episodes

Aug 18, 2022 • 25min

Lead Full Stack AI Engineer Becks Simpson

Tune in to hear more about Becks’ role as Lead Full Stack AI Engineer at Rogo, how her team determines what should and should not be added to the product tier for deep learning, the types of questions you should be asking along the investigation-to-product roadmap for AI and machine learning products, and so much more!

Key Points From This Episode:
- An introduction to today’s guest, Lead Full Stack AI Engineer Becks Simpson.
- Becks’ cover band, Des Confitures, made up of machine learning engineers and other academics.
- Becks’ career background and how she ended up in her role at Rogo.
- How Rogo enables people to unlock and make sense of unstructured or unorganized data.
- Why Becks’ role could be compared to that of an AI Swiss Army knife.
- How they determine what should and should not be added to the product tier for deep learning.
- Becks’ experience of having to give someone higher up a reality check about the technical needs of their product.
- Why Becks believes there are so many non-technical hats you need to wear as an AI or ML expert.
- Thoughts on the trend of product managers being taught how to do AI, but not AI people being taught to do product management.
- The importance of bringing data about data into the conversation.
- The types of questions you should be asking, and where the answers to understanding your dataset will take you.
- Why the investigation-to-product roadmap is not something you would learn in academia for AI/machine learning, and why it should be.
- Thoughts on why it is so common for someone to have one foot in industry and one foot in academia.
- An area of AI/machine learning that Becks is truly excited about: off-the-shelf models.

Tweetables:

“People think that [AI] can do more than what it can and it has only been the last few years where we realized that actually, there’s a lot of work to put it in production successfully, there’s a lot of catastrophic ways it can fail, there are a lot of considerations that need to be put in.” — Becks Simpson [0:11:39]

“Make sure that if you ever want to put any kind of machine learning or AI or something into a product, have people who can look at a road map for doing that and who can evaluate whether it even makes sense from an ROI business standpoint, and then work with the teams.” — Becks Simpson [0:12:55]

“I think for the people who are in academia, a lot of them are doing it to push the needle, and to push the state of the art, and to build things that we didn’t have before and to see if they can answer questions that we couldn’t answer before. Having said that, there’s not always a link back to a practical use case.” — Becks Simpson [0:20:25]

“Academia will always produce really interesting things and then it’s industry that will look at whether or not they can be used for practical problems.” — Becks Simpson [0:21:59]

Links Mentioned in Today’s Episode:
- Becks Simpson
- Rogo
- Des Confitures
- Montreal Institute of Learning Algorithms
- Sama

Aug 11, 2022 • 31min

Neural Rendering with fxguide Co-Founder Dr. Mike Seymour

Dr. Seymour aims to take cutting-edge technology and apply it to the special effects industry, such as with the new AI platform, PLATO. He is also a lecturer at the University of Sydney and works as a consultant within the special effects industry. He is an internationally respected researcher and expert in Digital Humans and virtual production, and his experience in both visual effects and pure maths makes him perfect for AI-based visual effects. In our conversation, we find out more about Dr. Seymour’s professional career journey and what he enjoys most about working as both a researcher and practitioner. We then get into all the details about AI in special effects as we learn about Digital Humans, the new PLATO platform, why AI dubbing is better, and the biggest challenges facing the application of AI in special effects.

Key Points From This Episode:
- Dr. Seymour explains his background and professional career journey.
- Why he enjoys bridging the gap between researcher and practitioner.
- An outline of the different topics that Dr. Seymour lectures in and what he is currently working on.
- He explains what he means by the term ‘digital humans’ and provides examples.
- The special effects platform, PLATO, he is currently working on and what it will be used for.
- An explanation of how PLATO was used in the Polish movie, The Champion.
- He explains the future goals and aims for auto-dubbing using AI and visual effects.
- Why the auto-dubbing procedure will not add to or encumber the existing processes of making a movie.
- Reasons why AI auto-dubbing is better than traditional dubbing.
- Whether this is a natural language processing challenge or more of a creative filmmaking challenge.
- A discussion about why new technologies take so long to be applied to real-world scenarios.
- How the underlying processes of PLATO differ from what is required to make a deepfake video.
- His approach to overcoming challenges facing the PLATO platform.
- Other areas of the entertainment industry in which Dr. Seymour expects AI to be disruptive.

Tweetables:

“In the film, half the actors are the original actors come back to just re-voice themselves, half aren’t. In the film hopefully, when you watch it, it’s indistinguishable that it wasn’t actually filmed in English.” — @mikeseymour [0:10:15]

“In our process, it doesn’t apply because if you were saying in four words what I’d said in three, it would just match. We don’t have to match the timing, we don’t have to match the lip movement or jaw movement, it all gets fixed.” — @mikeseymour [0:15:15]

“My attitude is, it’s all very well for us to get this working in the lab, but it has to work in the real world.” — @mikeseymour [0:19:56]

Links Mentioned in Today’s Episode:
- Dr. Mike Seymour on LinkedIn
- Dr. Mike Seymour on Twitter
- Dr. Mike Seymour on Google Scholar
- University of Sydney
- fxguide
- Dr. Paul Debevec
- Pixar
- Darryl Marks on LinkedIn
- Adapt Entertainment
- PLATO Demonstration Link
- The Champion
- Pinscreen
- Respeecher
- Rob Stevenson on LinkedIn
- Rob Stevenson on Twitter
- Sama

Jul 28, 2022 • 40min

PwC UK's AI for Good Lead Maria Luciana Axente

Ethics in AI is considered vital to the healthy development of all AI technologies, but this is easier said than done. In this episode of How AI Happens, we speak to Maria Luciana Axente to help us unpack this essential topic. Maria is a seasoned AI policy expert, public speaker, and executive, with a respected track record of working with companies whose foundation is in technology. She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI. Maria works at PwC, where her work focuses on the operationalization of AI and data across the firm. She also plays a vital role in advising government, regulators, policymakers, civil society, and research institutions on ethically aligned AI public policy. In our conversation, we talk about the importance of building responsible and ethical AI while leveraging technology to build a better society. We learn why companies need to create a culture of ethics for building AI, what types of values encompass responsible technology, the role of diversity and inclusion, the challenges that companies face, and whose responsibility it is. We also learn about some basic steps your organization can take and hear about helpful resources available to guide companies and developers through the process.

Key Points From This Episode:
- Maria’s professional career journey and her involvement in various AI organizations.
- The motivation which drives AI and machine learning professionals in their careers.
- How to create and foster a system that instills people with positivity.
- Examples of companies that have successfully fostered a positive and ethical culture.
- What good values look like for building responsible and ethical technology.
- We learn about the values the responsible AI toolkit prescribes.
- Some of the challenges faced when building responsible and ethical technology.
- An outline of the questions a practitioner can ask to ensure they operate according to universal ethics.
- She shares some helpful resources concerning ethical guidelines for AI.
- Why diversity and inclusion are essential to building technology.
- Whose responsibility it should be to ensure the ethical and inclusive development of AI.
- We wrap up the episode with a takeaway message that Maria has for listeners.

Tweetables:

“How we have proceeded so far, via Silicon Valley, 'move fast and break things.' It has to stop because we are in a time when if we continue in the same way, we're going to generate more negative impacts than positive impacts.” — @maria_axente [0:10:19]

“You need to build a culture that goes above and beyond technology itself.” — @maria_axente [0:12:05]

“Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI.” — @maria_axente [0:16:39]

“You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered.” — @maria_axente [0:29:34]

“AI doesn't have a technical problem. AI has a human problem.” — @maria_axente [0:32:34]

Links Mentioned in Today’s Episode:
- Maria Luciana Axente on LinkedIn
- Maria Luciana Axente on Twitter
- PwC UK
- PwC responsible AI toolkit
- Sama

Jul 21, 2022 • 23min

Building Responsible AI with Mieke de Ketelaere

The gap between those creating AI systems and those using the systems is growing. After 27 years on the other side of technology, Mieke decided that it was time to do something about the issues that she was seeing in the AI space. Today she is an Adjunct Professor for Sustainable Ethical and Trustworthy AI at Vlerick Business School, and during this episode, Mieke shares her thoughts on how we can go about building responsible AI systems so that the world can experience the full range of benefits of AI.

Key Points From This Episode:
- An overview of Mieke’s educational and career background.
- Elements of the AI space that have and haven’t changed since Mieke studied robotics AI in 1992.
- What drew Mieke back into the AI space five years ago.
- The importance of understanding the limitations of AI.
- Mieke shares her thoughts on how to build responsible AI systems.
- The challenges of building responsible AI systems.
- Why the European AI Act isn’t able to address the complexities of the AI sector.
- The missing link between the people creating AI systems and the people using them.
- Exploring the issue of deep fakes.
- The role of AI Translators, and an overview of the AI Translator course available in Belgium.

Tweetables:

“The compute power had changed, and the volumes of data had changed, but the [AI] principles hadn't changed that much. Only some really important points never made the translation.” — @miekedk [0:02:03]

“[AI systems] don't automatically adapt themselves. You need to have your processes in place in order to make sure that the systems adapt to the changing context.” — @miekedk [0:04:06]

“AI systems are starting to be included into operational processes in companies, but only from the profit side, not understanding that they might have a negative impact on people especially when they start to make automated decisions.” — @miekedk [0:04:52]

“Let's move out of our silos and sit together in a multidisciplinary debate to discuss the systems we're going to create.” — @miekedk [0:07:52]

Links Mentioned in Today’s Episode:
- Mieke de Ketelaere
- Mieke's Books
- The European AI Act
- Sama

Jul 14, 2022 • 21min

Allied Digital CDO Utpal Chakraborty

Today, on How AI Happens, we are joined by the Chief Digital Officer at Allied Digital, Utpal Chakraborty, to talk all things AI at Allied Digital. You’ll hear about Utpal’s AI background, how he defines Allied Digital’s mission, what Smart Cities are and how the company captures data to achieve them, as well as why AI/machine learning is the right approach for Smart Cities. We also discuss what success looks like to Utpal and the importance of designing something seamless for the end-user. To find out why customer success is Allied Digital’s success, tune in today!

Key Points From This Episode:
- A brief overview of Utpal’s background and how he ended up in his current role at Allied Digital.
- How Utpal would characterize Allied Digital’s mission.
- The definition of Smart Cities.
- How Allied Digital is able to capture the data needed to make a city a Smart City.
- What made it clear to Utpal that AI/machine learning was the right approach for Smart City services.
- Insight into what success and an end goal look like for Utpal.
- Why it is everyone’s job to design something that is seamless for the end-user.
- A look at what Utpal thinks has been truly disruptive in the AI space.

Tweetables:

“I looked at how we can move this [Smart City] tool ahead and that’s where the AI machine learning came into the picture.” — @utpal_bob [0:11:11]

“[Allied Digital] wants to bring that wow factor into each and every service product and solution that we provide to our customers and, in turn, that they provide to the industry.” — @utpal_bob [0:16:27]

Links Mentioned in Today’s Episode:
- Utpal Chakraborty on LinkedIn
- Utpal Chakraborty on Twitter
- Allied Digital Services
- Sama

Jul 7, 2022 • 26min

Calibrate Ventures Partners Jason Schoettler, Kevin Dunlap

In today’s conversation, we learn about Jason and Kevin’s career backgrounds, the potential that the deep technology sector has, what ideas excite them the most, the challenges when investing in AI-based companies, what kind of technology is easily understood by the consumer, what makes a technological innovation successful, and much more.

Key Points From This Episode:
- Background and professional paths that led Jason and Kevin to their current roles.
- Reasons behind starting Calibrate Ventures and originally entering the sector.
- How the deep-technology sector can solve current problems.
- What kind of new technology and innovation they get most excited about.
- The most essential quality of innovative technology: what people want.
- Rundown of the diverse, experienced, and talented team they work with.
- Jason shares an example of a technological innovation that solved a real-world problem.
- How they differentiate the approach when investing in AI companies.
- What influences the longer sale cycles in the AI technology sector.
- An example of the challenges when integrating AI technology with the business side.
- The benefits that Jason and Kevin’s experience adds to the company.
- Some examples of the kind of technology that translates well to the consumer.
- We find out what their opinion is about automation and augmentation.
- We wrap up the show with some advice from Jason and Kevin for AI entrepreneurs.

Tweetables:

“I think for me personally, the cycle-time was very long. You work on projects for a very long time. As an investor, I get to see new ideas and new concepts every day. From an intellectual curiosity standpoint, there couldn’t be a better job.” — Kevin Dunlap [0:05:17]

“So that lights me up. When I hear somebody talk about a problem that they are looking to solve and how their technology can do it uniquely with some type of competitive or differentiated advantage we think is sustainable.” — Jason Schoettler [0:08:14]

“The things that really excite us are not, where can we do better than humans but first, where are there not humans work right now where we need humans doing work.” — Jason Schoettler [0:20:44]

“Anytime that someone is doing a job that is dangerous, that is able to be solved with technology, I think we owe it to ourselves to do that.” — Kevin Dunlap [0:22:39]

Links Mentioned in Today’s Episode:
- Jason Schoettler on LinkedIn
- Kevin Dunlap on LinkedIn
- Calibrate Ventures
- Calibrate Ventures on LinkedIn
- GrayMatter Robotics
- GrayMatter Robotics on LinkedIn

Jun 23, 2022 • 37min

x.AI Founder Dennis Mortensen

Whether you’re building AI for self-driving cars or for scheduling meetings, it’s all about prediction! In this episode, we’re going to explore the complexity of teaching the human power of prediction to machines.

Key Points From This Episode:
- Dennis shares an overview of what he has spent his career focusing on.
- How Dennis defines an intelligent agent.
- The role of prediction in the AI space.
- Dennis explains the mission that drove his most recent entrepreneurial venture, x.ai (acquired by Bizzabo).
- Challenges of transferring humans’ capacity for prediction and empathy to machines.
- Some of Dennis’s key learnings from his time working on the technology for x.ai.
- Unrealistic expectations that humans have of machines.
- How we can teach humans to have empathy for machines.
- Dennis’s hope for the next generation in terms of their approach to AI.
- A lesson Dennis learned from his daughter about AI and about human nature.
- What Dennis is most excited about in the AI space.

Tweetables:

“The whole umbrella of AI is really just one big prediction engine.” — @DennisMortensen [0:03:38]

“Language is not a solved science.” — @DennisMortensen [0:06:32]

“The expectation of a machine response is different to that of a human response to the same question.” — @DennisMortensen [0:11:36]

Links Mentioned in Today’s Episode:
- Dennis Mortensen on LinkedIn
- Bizzabo [Formerly x.ai]

Jun 16, 2022 • 29min

Unity SVP of AI Danny Lange: The Industrial Metaverse

Leading AI companies are adopting simulation, synthetic data, and other aspects of the metaverse at an incredibly fast rate, and the opportunities for AI/machine learning practitioners are endless. Tune in today for a fascinating conversation about how the real world and the virtual world can be blended in what Danny refers to as “the real metaverse.”

Key Points From This Episode:
- The career path that led Danny to his current role as Senior Vice President of AI at Unity.
- Data: the machine learning challenge that Danny has dealt with in many different forms throughout his career.
- An explanation of how Unity uses data to make game recommendations to players.
- How deep learning embedding works.
- What drew Danny to Unity.
- The benefits of using synthetic data.
- How Unity ensures that the synthetic data they create is as unbiased as possible.
- The importance of anchoring your synthetic data to a real-world counterpart.
- Danny’s thoughts on the potential of the Metaverse.
- Examples of the career opportunities that the Metaverse has opened up for AI/machine learning practitioners.

Tweetables:

“When you play a game, I don’t need to know your name, your age. I don’t need to know where you live, or how much you earn. All that really matters is that my system needs to learn the way you play and what you are interested in in your gameplay, to make excellent recommendations for other games. That’s what drives the gaming ecosystem.” — @danny_lange [0:03:16]

“Deep learning embedding is something that is really driving a lot of progress right now in the machine learning AI space.” — @danny_lange [0:06:04]

“The world is built on uncertainty and we are looking at simulation in an uncertain world, rather than in a Newtonian, deterministic world.” — @danny_lange [0:23:23]

Links Mentioned in Today’s Episode:
- Danny Lange on LinkedIn
- Unity

Jun 9, 2022 • 27min

CarbonChain Head of Data & Machine Learning Archy De Berker

In today’s episode, Archy De Berker, Head of Data and Machine Learning at CarbonChain, explains how he and his team calculate carbon footprints, some of the challenges that they face in this line of work, the most valuable use of machine learning in their business (and for climate change solutions as a whole), and some important lessons that he has learned throughout his diverse career so far!

Key Points From This Episode:
- An overview of Archy’s career trajectory, from academic neuroscientist to Head of Data and Machine Learning at CarbonChain.
- The foundational mission of CarbonChain.
- Archy explains how machine learning can be applied in the context of energy storage as a climate change solution.
- Industries that CarbonChain focuses on.
- How Archy and his team calculate carbon footprints.
- A key challenge for carbon footprinting.
- Where machine learning provides the most value for CarbonChain.
- The importance of the field of document understanding.
- A story from Archy’s time at Element AI that highlights the value of having technical people working as close as possible to the design and data generation.
- Why Archy chose to move into the product management realm.
- Additional ways that machine learning can help solve climate change issues.

Tweetables:

“We build automated carbon footprinting for the world’s most polluting industries. We’re really trying to help people who are buying things from carbon-intense industries figure out where they can get lower carbon versions of the same kind of products.” — @ArchydeB [0:02:14]

“A key challenge for carbon footprinting is that you need to be able to understand somebody’s business in order to tell them what the carbon footprint of their activities is.” — @ArchydeB [0:13:01]

“Probably the most valuable place for machine learning in our business is taking all this heterogeneous customer data from all these different systems and being able to map it onto a very rigid format that we can then retrieve information from our databases for.” — @ArchydeB [0:13:24]

Links Mentioned in Today’s Episode:
- Archy de Berker on LinkedIn
- CarbonChain

Jun 3, 2022 • 34min

Privacy in AI with MATR Ventures Partner Hessie Jones

MATR Ventures Partner Hessie Jones is dedicated to solving issues around AI ethics, as well as diversity and representation in the space. In our conversation with her, she breaks down how she came to believe something was wrong with the way companies harvest and use data, and the steps she has taken towards solving the privacy problem. We discuss the danger of intentionally convoluted terms and conditions and the problem with synthetic data. Tune in to hear about the future of biometrics and data privacy and the emerging technologies using data to increase accountability.

Key Points From This Episode:
- Hessie Jones’ background: from marketing to her current role at MATR Ventures.
- Hessie’s focus on AI ethics and privacy, and diversity in the venture capital space.
- Her mission to provide equal access to programs and investment.
- What inspired her to tackle the problem of AI ethics and privacy.
- The consequences of Snowden and the responsibility of tech to enforce customer privacy.
- Hessie’s path of seeking the solution to the privacy problem.
- The danger of blanketed terms and conditions.
- The problem with synthetic data.
- Crass uses of facial recognition.
- Emerging technologies using data to increase accountability.
- The future of biometrics and data privacy.
- The mission of MATR Ventures and who they invest in.
- Examples of technologies MATR Ventures employs to ensure accountability.

Tweetables:

“Venture capital is not immune to the diversity problems that we see today.” — Hessie Jones [0:05:04]

“We should separate who you are as an individual from who you are as a business customer.” — Hessie Jones [0:08:49]

“The problem I see with synthetic data is the rise of deep fakes.” — Hessie Jones [0:21:24]

“The future is really about data that’s not shared, or if it’s shared, it’s shared in a way that increases accountability.” — Hessie Jones [0:26:43]

Links Mentioned in Today’s Episode:
- Hessie Jones on LinkedIn
- MATR Ventures
- Responsible AI
