
The Shifting Privacy Left Podcast

Latest episodes

Apr 11, 2023 • 35min

S2E14: Addressing Privacy with Static Analysis Techniques Like ‘Taint-Tracking’ & ‘Data Flow Analysis’ with Suchakra Sharma (Privado.ai)

This week, we welcome Suchakra Sharma, Chief Scientist at Privado.ai, where he builds code analysis tools for data privacy & security. Previously, he earned his PhD in Computer Engineering from Polytechnique Montreal, where he worked on eBPF technology and hardware-assisted tracing techniques for OS analysis. In this conversation, we delve into Suchakra's background in shifting left for security and how he applies traditional, tested static analysis techniques, such as 'taint tracking' and 'data flow analysis', to large code bases at scale to help fix privacy leaks right at the source.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

Suchakra aligns himself with the philosophical aspects of privacy and wants to work on anything that helps limit the erosion of privacy in modern society, since privacy is fundamental to all of us. These needs have always existed, and as societies advance, we require ever-stronger guarantees of privacy. After all, it is humans who are behind systems, and humans who will be affected by the machines we build. Check out this fascinating discussion on how to shift privacy left in your organization.

Topics Covered:
- Why Suchakra became interested in privacy after focusing on static code analysis for security
- What 'shift left' means, and lessons from the 'shift security left' movement that can be applied to 'shift privacy left' efforts
- Sociological perspectives on how humans developed a need to keep things 'private' from others
- How to provide engineering-focused guarantees around privacy today, and what the role of engineers should be within this 'shift privacy left' paradigm
- Suchakra's USENIX Enigma talk and a discussion of 'taint tracking' & 'data flow analysis' techniques (see the sketch after these notes)
- Which companies should build in-house tooling for static analysis, and which should outsource to experienced vendors like Privado
- How to address 'privacy bugs' in code; why it's important to have an 'auditor's mindset'; and why we'll see 'Privacy Bug Bounty Programs' soon
- Suchakra's advice to engineering managers on moving the needle on privacy in their orgs

Resources Mentioned:
- Join Privado's Slack Community
- Review Privado's Open Source Code Scanning Tools

Guest Info:
- Connect with Suchakra on LinkedIn
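To make the technique concrete, here is a minimal, hypothetical taint-tracking sketch in Python. Real analyzers, including Privado's open-source scanner, work over parsed representations of code rather than toy tuples, so treat this only as an illustration of the propagation idea: data from a 'source' of personal data stays tainted through assignments until it reaches a 'sink' that could leak it.

```python
# Minimal, hypothetical taint-tracking sketch: propagate "taint" from
# sources of personal data to sinks (e.g., logging or third-party calls).
# Real static analyzers work on ASTs / code property graphs; this toy
# version walks a straight-line list of assignments and calls.

SOURCES = {"get_user_email", "get_ssn"}   # functions returning personal data
SINKS = {"log", "send_to_third_party"}    # functions that leak tainted data

def analyze(program):
    """program: list of ('assign', var, func, args) or ('call', func, args)."""
    tainted = set()
    leaks = []
    for i, stmt in enumerate(program):
        if stmt[0] == "assign":
            _, var, func, args = stmt
            # A variable becomes tainted if it comes from a source,
            # or from any expression over already-tainted variables.
            if func in SOURCES or any(a in tainted for a in args):
                tainted.add(var)
        elif stmt[0] == "call":
            _, func, args = stmt
            if func in SINKS and any(a in tainted for a in args):
                leaks.append((i, func, [a for a in args if a in tainted]))
    return leaks

program = [
    ("assign", "email", "get_user_email", []),  # source: email is tainted
    ("assign", "msg", "format", ["email"]),     # taint propagates through format
    ("call", "log", ["msg"]),                   # tainted data reaches a sink
]
print(analyze(program))  # [(2, 'log', ['msg'])]
```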
Apr 4, 2023 • 49min

S2E13: Diving Deep into Fully Homomorphic Encryption (FHE) with Kurt R. Rohloff (Duality Technologies)

I am delighted to welcome this week's guest, Kurt Rohloff. Kurt is the CTO and Co-Founder of Duality Technologies, a privacy tech company that enables organizations to leverage data across their ecosystems and generate joint insights for better business while preserving privacy. Kurt was also a co-founder of the OpenFHE homomorphic encryption software library, which enables practical and usable privacy and collaborative data analytics.

He has successfully led teams that develop, transition, and apply first-in-the-world technology capabilities for both the Department of Defense and commercial use. Kurt specializes in generating, developing, and commercializing innovative secure computing technologies, with a focus on privacy and AI/ML at scale. In this episode, we discuss use cases for leveraging Fully Homomorphic Encryption (FHE) and other PETs. In a previous episode, we spoke about federated learning; in this episode, we learn how to achieve secure federated learning using FHE techniques.

Kurt has focused on and supported homomorphic encryption since it was first discovered, including running an implementation team on PROCEED, one of the seminal DARPA-funded projects. FHE, as opposed to other kinds of privacy technologies, is more general and malleable. Because each organization has different needs when it comes to data collaboration, Duality Technologies offers three separate collaboration models, which enable organizations to secure sensitive data while still allowing different types of sharing.

Topics Covered:
- How companies can gain utility from a dataset while protecting the privacy of individuals or entities
- How FHE helps with fraud prevention, secure investigations, real-world evidence & genome-wide association studies
- Use cases for the three collaboration models Duality offers: Single Data Set, Horizontal Data Analysis, and Vertical Data Analysis
- Comparison of, and trade-offs between, federated learning and homomorphic encryption (see the sketch after these notes)
- The proliferation of FHE standards
- OpenFHE.org, the leading open source library for implementations of fully homomorphic encryption protocols

Resources Mentioned:
- Review the OpenFHE encryption software library
- Learn about Duality

Guest Info:
- Connect with Kurt on LinkedIn
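For a hands-on feel for computing on encrypted data, here is a toy Python implementation of the Paillier cryptosystem. One important hedge: Paillier is only partially (additively) homomorphic; fully homomorphic schemes, such as those implemented in OpenFHE, also support multiplication on ciphertexts, which is what enables general computation. The parameters below are tiny demo values, not secure ones.

```python
# Toy Paillier cryptosystem: additively homomorphic, so we can sum
# encrypted values without decrypting them. (Real FHE schemes, like those
# in OpenFHE, also support multiplication and much larger computations.)
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

# Tiny demo primes; real deployments use 2048+ bit primes.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse, Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2                # multiply ciphertexts -> add plaintexts
print(decrypt(c_sum))                 # 42
```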
Mar 28, 2023 • 55min

S2E12: 'Building Powerful ML Models with Privacy & Ethics' with Katharine Jarmul (ThoughtWorks)

This week, I'm joined by Katharine Jarmul, Principal Data Scientist at Thoughtworks and author of the forthcoming book, "Practical Data Privacy: Enhancing Privacy and Security in Data." Katharine began asking questions similar to those of today's ethical machine learning community as a university student working on her undergrad thesis during the war in Iraq. She focused that research on natural language processing and investigated the statistical differences between embedded & non-embedded reporters. In our conversation, we discuss ethical & secure machine learning approaches, threat modeling against adversarial attacks, the importance of distributed data setups, and what Katharine wants data scientists to know about privacy and ethical ML.

Katharine believes we should never fall victim to a 'techno-solutionist' mindset, believing we can solve a deep societal problem with tech alone. However, by solving issues around privacy & consent in data collection, we can more easily address the challenges of ethical ML. In fact, ML research is finally beginning to broaden to include the intersections of law, privacy, and ethics. Katharine anticipates that data scientists will embrace PETs that facilitate data sharing in a privacy-preserving way, and she evangelizes making it no longer normal to send ML data from one company to another.

Topics Covered:
- Katharine's motivation for writing a book on privacy for a data scientist audience, and what she hopes readers will learn from it
- What areas must be addressed for ML to be considered ethical
- Overlapping AI/ML & privacy goals
- Challenges with sharing data for analytics
- The need for data scientists to embrace PETs
- How PETs will likely mature across orgs over the next 2 years
- Katharine's & Debra's favorite PETs
- The importance of threat modeling ML models: 'adversarial attacks' like 'model inversion' & 'membership inference' (see the sketch after these notes)
- Why companies that train LLMs must be accountable for the safety of their models
- New ethical approaches to data sharing
- Why scraping data off the Internet to train models is the lazy, unethical way to train ML models

Resources Mentioned:
- Pre-order the forthcoming book: "Practical Data Privacy"
- Subscribe to Katharine's newsletter: Probably Private

Guest Info:
- Follow Katharine on LinkedIn
- Follow Katharine on Twitter
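As a concrete illustration of one adversarial attack discussed here, the sketch below implements a naive 'membership inference' test in Python: an overfit model tends to be more confident on its training records than on unseen ones, so a simple confidence threshold can guess who was in the training set. All data, model choices, and thresholds below are hypothetical.

```python
# Toy 'membership inference' attack: an overfit model is more confident
# on its training points than on unseen points, so a simple confidence
# threshold can guess membership. Hypothetical data; sketch only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
X_train, y_train = X[:200], y[:200]
X_out = X[200:]                       # points the model never saw

# Deliberately overfit: deep trees, no regularization.
model = RandomForestClassifier(n_estimators=50, min_samples_leaf=1)
model.fit(X_train, y_train)

conf_in = model.predict_proba(X_train).max(axis=1)   # confidence on members
conf_out = model.predict_proba(X_out).max(axis=1)    # confidence on non-members

# Attack: claim "member" whenever confidence exceeds a threshold.
threshold = 0.9
tpr = (conf_in > threshold).mean()    # members correctly flagged
fpr = (conf_out > threshold).mean()   # non-members falsely flagged
print(f"TPR={tpr:.2f} FPR={fpr:.2f}") # a large gap signals membership leakage
```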
Mar 21, 2023 • 53min

S2E11: Lessons Learned as a Privacy Engineering Manager with Menotti Minutillo (ex-Twitter & Uber)

This week, we gain insights into the profession of privacy engineering with guest Menotti Minutillo, a Sr. Privacy Engineering Manager with 15+ years of experience leading critical programs and product delivery at companies like Uber, Thrive Global & Twitter. He started his career in 2007 on Wall Street as a DevOps & Infrastructure Engineer; now, Menotti is a sought-after technical privacy expert and Privacy Tech Advisor. In this conversation, we discuss privacy engineering approaches that have worked, the skillsets required for privacy engineering, and the current climate for landing privacy engineering roles.

Menotti sees privacy engineering as the practice of building or improving info systems to advance a set of privacy goals. It's like a 'layer cake': you have different protections and risk reductions based on threat modeling, as well as different specialization capabilities for larger orgs. It makes sense that his roles have woven through privacy-adjacent work from company to company; his journey into privacy engineering began as 'adjacent work'. Today, he shares lessons learned from taking a PET like differential privacy from the lab, to systematizing it within an organization, to deploying it in the real world. In this episode, we delve into tools, technical processes, technical standards, the maturing landscape for privacy engineers, and how the success of privacy is coupled with the success of each product shipped.

Topics Covered:
- How Menotti found his way to managing privacy engineering teams
- Menotti's definition of 'privacy engineer' & the skillsets required
- What it was like to work at Uber & Twitter, which have multiple privacy engineering teams
- Best practices for setting up teams & deploying solutions
- Privacy outcomes that privacy engineers should keep top of mind
- Best practices for privacy architecture
- Menotti's positive experience at Uber working with privacy researchers from UC Berkeley to take differential privacy from the lab to a real-world deployment (see the sketch after these notes)
- Lessons learned from times of transition, including at Twitter during Musk's takeover
- Whether privacy was a 'zero interest rate bet,' and what that means for privacy engineering roles given current economic realities

Resources Mentioned:
- Check out the PEPR conference
- Read 'Was Privacy a Zero Interest Rate Bet?'

Guest Info:
- Follow Menotti on LinkedIn
- Connect with Menotti on Mastodon
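As background for the differential privacy discussion, here is a minimal sketch of the Laplace mechanism, the textbook building block of differentially private releases. This is a generic illustration, not the actual system Uber deployed: noise scaled to sensitivity/epsilon is added to a count so that any single person's presence barely changes the answer.

```python
# Minimal Laplace mechanism sketch: answer a counting query with noise
# scaled to sensitivity/epsilon, the core idea of differential privacy.
import numpy as np

rng = np.random.default_rng()

def dp_count(values, predicate, epsilon):
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0                 # one person changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 41, 55, 38, 62, 27]   # hypothetical dataset
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy answer near 3
```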
Mar 14, 2023 • 46min

S2E10: Leveraging Synthetic Data and Privacy Guarantees with Lipika Ramaswamy (Gretel.ai)

This week, we welcome Lipika Ramaswamy, Senior Applied Scientist at Gretel AI, a privacy tech company that makes it simple to generate anonymized and safe synthetic data via APIs. Previously, Lipika worked as a Data Scientist at LeapYear Technologies and as a Machine Learning Researcher at Harvard University's Privacy Tools Project.

Lipika's interest in both machine learning and privacy comes from her love of math and of things that can be defined with equations. Her interest was piqued in grad school when she accidentally walked into a classroom holding a lecture on applying differential privacy for data science. The intersection of data with the privacy guarantees available today has kept her hooked ever since.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

There's a lot to unpack when it comes to synthetic data & privacy guarantees, and Lipika takes listeners on a deep dive into these compelling topics. She finds it elegant how privacy assurances like differential privacy revolve around math and statistics at their core. Essentially, she loves building things with 'usable privacy' & security that people can easily use. We also delve into the metrics tracked in the Gretel Synthetic Data Report, which assesses both the 'statistical integrity' & the 'privacy levels' of a customer's training data.

Topics Covered:
- The definition of 'synthetic data,' & good use cases (see the sketch after these notes)
- The process of creating synthetic data
- How to ensure that synthetic data is 'privacy-preserving'
- Privacy problems that may arise from overtraining ML models
- When to use synthetic data rather than other techniques like tokenization, anonymization, aggregation & others
- Examples of good vs. poor use cases for synthetic data
- Common misperceptions around synthetic data
- Gretel.ai's approach to 'privacy assurance,' including 'privacy filters,' which prevent some privacy harms in the output of LLMs
- How to plug into the 'synthetic data' community
- Who bears the responsibility for educating the public about new technology like LLMs and potential harms
- Highlights from Gretel.ai's Synthesize 2023 conference

Resources Mentioned:
- Join Gretel's Synthetic Data Community on Discord
- Watch Talks on Synthetic Data on YouTube

Guest Info:
- Connect with Lipika on LinkedIn
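To ground the idea, here is a deliberately naive synthetic-data sketch in Python: fit a multivariate Gaussian to (hypothetical) real data and sample fresh rows from it. Production generators like Gretel's use far richer models plus privacy filters and differential privacy; this only demonstrates what 'statistical integrity' means, i.e., synthetic rows that match distributional properties without being copies of real records.

```python
# Naive synthetic-data sketch: fit a multivariate Gaussian to the real
# data and sample new rows. Hypothetical columns (age, income); shows the
# idea of statistical integrity, not a production-grade generator.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical "real" data: two correlated numeric columns.
real = rng.multivariate_normal(
    [40, 60000], [[100, 25000], [25000, 4e8]], size=1000)

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# Statistical integrity check: marginals and correlation should match...
print(real.mean(axis=0).round(0), synthetic.mean(axis=0).round(0))
print(np.corrcoef(real.T)[0, 1].round(2),
      np.corrcoef(synthetic.T)[0, 1].round(2))
# ...while no synthetic row needs to equal any real row.
```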
Mar 7, 2023 • 47min

S2E9: Personalized Noise, Decaying Photos, & Digital Forgetting with Apu Kapadia (Indiana University Bloomington)

In this episode, I'm delighted to welcome Apu Kapadia, Professor of Computer Science and Informatics at the School of Informatics and Computing, Indiana University. His research focuses on the privacy implications of ubiquitous cameras and online photo sharing. More recently, he has examined the cybersecurity and privacy challenges posed by AI-based smart voice assistants that can listen to and converse with us.

Prof. Kapadia has been excited by anonymized networks since childhood: he remembers watching movies where a telephone call was routed around the world so that it became impossible to trace. What really fascinates him now is how much there is to understand, mathematically and technically, in order to measure privacy. In recent years, he has been interested in privacy in the context of digital photography and audio shared online and on social media. His current research focuses on understanding privacy issues around photo sharing in a world with cameras everywhere.

In this conversation, we delve into how users are affected once privacy violations have already occurred; the privacy implications for children when parents share photos of them online; the fascinating future of trusted hardware that will help ensure "digital forgetting"; and how all of this is a people problem as much as a technical one.

Topics Covered:
- Can we trick 'automated speech recognition' (ASR)?
- Apu's co-authored paper, 'Defending Against Microphone-based Attacks with Personalized Noise' (see the sketch after these notes)
- What Apu means by 'tangible privacy' & what design approaches he recommends
- Apu's view on 'bystander privacy' & the approach he took in his research
- How to leverage 'temporal redactions' via 'trusted hardware' for 'digital forgetting'
- Apu's surprising finding in his research on 'interpersonal privacy' in the context of social media and photos
- Guidance for developers building privacy-respectful social media apps
- Apu's research on cybersecurity & privacy for marginalized & vulnerable populations
- How we can make privacy & security more 'usable'

Resources Mentioned:
- Read: Defending Against Microphone-Based Attacks with Personalized Noise
- Read: Decaying Photos for Enhanced Privacy: User Perceptions Towards Temporal Redactions and 'Trusted' Platforms

Guest Info:
- Follow Prof. Kapadia on LinkedIn
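As a rough illustration of the noise-masking idea behind the 'Personalized Noise' paper, the sketch below mixes noise into a stand-in speech signal at a chosen signal-to-noise ratio, degrading what a nearby microphone would capture. The paper's defense tailors the noise to the speaker; plain white noise and a sine-wave "speech" signal are simplifications here, assumed purely for illustration.

```python
# Toy noise-masking sketch: mix masking noise into a signal so that a
# nearby microphone records a degraded version. White noise stands in
# for the paper's personalized, speaker-tailored noise.
import numpy as np

rng = np.random.default_rng(1)
sr = 16000
t = np.arange(sr) / sr
speech = 0.5 * np.sin(2 * np.pi * 220 * t)     # stand-in for a speech signal

def add_masking_noise(signal, snr_db):
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

masked = add_masking_noise(speech, snr_db=0)   # 0 dB: noise as loud as speech
print(f"residual correlation: {np.corrcoef(speech, masked)[0, 1]:.2f}")
```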
Feb 28, 2023 • 41min

S2E8: Leveraging Federated Learning for Input Privacy with Victor Platt

Victor Platt is a Senior AI Security and Privacy Strategist who previously served as Head of Security and Privacy for the privacy tech company Integrate.ai. Victor was formerly a founding member of the Risk AI Team at Omnia AI, Deloitte's artificial intelligence practice in Canada. He joins today to discuss the privacy enhancing technologies (PETs) that are shaping industries around the world, with a focus on federated learning.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

Victor views PETs as functional requirements and says they shouldn't be buried in your design document as nonfunctional obligations. In his work, he has found key gaps where organizations were only doing "security for security's sake." Instead, he believes organizations should think about privacy at the forefront. Not only that, we should all get excited about it, because we all have a stake in privacy.

With federated learning, you have the tools to train ML models on large data sets with precision at scale, without risking user privacy. In this conversation, Victor demystifies what federated learning is, describes its two flavors (at the edge and across data silos), and explains how it works and how it compares to traditional machine learning. We dive deep into how an organization knows when to use federated learning, with specific advice for developers and data scientists implementing it in their organizations. (A minimal sketch of the core averaging step follows these notes.)

Topics Covered:
- What 'federated learning' is and how it compares to traditional machine learning
- When an organization should use vertical federated learning vs. horizontal federated learning, or a hybrid of the two
- A key challenge in 'transfer learning': knowing whether two data sets are related to each other, and techniques to overcome this, like 'private set intersection'
- How the future of technology will be underpinned by a 'constellation of PETs'
- The distinction between 'input privacy' & 'output privacy'
- Different kinds of federated learning, with use case examples
- Where the responsibility for adding PETs lies within an organization
- The key barriers to adopting federated learning and other PETs across different industries and use cases
- How to move the needle on data privacy when it comes to legislation and regulation

Resources Mentioned:
- Take this outstanding, free class from OpenMined: Our Privacy Opportunity

Guest Info:
- Follow Victor on LinkedIn

Follow the SPL Show:
- Follow us on Twitter
- Follow us on LinkedIn
- Check out our website
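To show the mechanics, here is a minimal, one-shot sketch of federated averaging in Python: each client fits a model on its own private data, and only the model weights, never the raw data, travel to the server for averaging. Real deployments iterate over many rounds and layer on input-privacy protections (secure aggregation, differential privacy, or FHE as in episode S2E13); all data below is hypothetical.

```python
# Minimal one-shot federated averaging (FedAvg) sketch: clients fit a
# linear model locally; the server averages weights, never seeing raw data.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def make_client_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_fit(X, y):
    # Closed-form least squares on the client's private data.
    return np.linalg.lstsq(X, y, rcond=None)[0]

clients = [make_client_data(50) for _ in range(5)]   # 5 data silos
local_weights = [local_fit(X, y) for X, y in clients]

global_w = np.mean(local_weights, axis=0)            # server-side averaging
print(global_w.round(2))                             # close to [2.0, -1.0]
```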
Feb 21, 2023 • 59min

S2E7: Bring Your Own Data, ChatGPT & Personal AIs with Markus Lampinen (Prifina)

In this conversation with Markus Lampinen, Co-founder and CEO at Prifina, a personal data platform, we discuss meaty topics like: Prifina's approach to building privacy-respectful apps for consumer wearable sensors; large language models (LLMs) like ChatGPT; and why we should consider training our own personal AIs.

Markus shares his entrepreneurial journey in the privacy world and how he is "the biggest data nerd you'll find." It started with tracking his own data, like his eating habits, activity, sleep, and stress, and then he built his company around that interest. His curiosity about what you can glean from your own data made him wonder how you could also improve your life, or the lives of your customers, with that data.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

We discuss how to approach building a privacy-first platform to unlock the value and use of IoT / sensor data. It began with the concept of individual ownership: who should actually benefit from the data that we generate? Markus says it should be individuals themselves. Prifina boasts a strong community of 30,000 developers who align around common interests - liberty, equality & data - and who build and test prototypes that gather and utilize data working for individuals, as opposed to corporate entities. The aim is to empower individuals, companies & developers to build apps that re-purpose individuals' own sensor data to gain privacy-enabled insights.

Listen to the episode on Apple Podcasts, Spotify, iHeartRadio, or on your favorite podcast platform.

Topics Covered:
- Enabling true, consumer-grade 'data portability' with personal data clouds (a 'bring your own data' approach)
- Use cases that illustrate the problems Prifina is solving with sensors
- What large language models (LLMs) and the chatbots trained on them are, and why they are so hot right now
- The dangers of using LLMs, with emphasis on privacy harms
- How to benefit from our own data with personal AIs
- Advice to data scientists, researchers and developers on how to architect for ethical uses of LLMs
- Who's responsible for educating the public about LLMs, chatbots, and their potential harms & limitations

Resources Mentioned:
- Learn more about Prifina
- Join Prifina's Slack Community: Liberty.Equality.Data

Guest Info:
- Follow Markus on
Feb 14, 2023 • 59min

S2E6: 'Privacy Left Trust' with Gary LaFever (Anonos)

Today, I welcome Gary LaFever, Co-CEO & General Counsel at Anonos, WEF Global Innovator, and a solutions-oriented futurist with a computer science and legal background. Gary has over 35 years of technical, legal, and policy experience that enables him to approach issues from multiple perspectives. I last saw Gary when we shared the stage at a RegTech conference in London six years ago, and it was a pleasure to speak with him again about how the Schrems II decision, coupled with the increasing prevalence of data breaches and ransomware attacks, has shifted privacy left from optional to mandatory, necessitating a "privacy left trust" approach.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

Gary describes the 7 Universal Data Use Cases with relatable examples and explains how they apply across orgs and industries, regardless of jurisdiction. We then dive into what Gary is seeing in the market with regard to these use cases. He then reveals the 3 Main Data Use Obstacles to accomplishing these use cases and how to overcome them with "statutory pseudonymization" and "synthetic data."

In this conversation about doing business in a de-risked environment, we discuss why you can't approach privacy with just words (contracts, policies, and treaties); why it's essential to protect data in use; and how you can embed technical controls that move with the data, providing protection that meets regulatory thresholds while "in use" and unlocking additional data use cases. In other words, these effective controls equate to competitive advantage.

Topics Covered:
- Why trust must be updated to be technologically enforced: "privacy left trust"
- The increasing prevalence of data breaches and ransomware attacks, and how they have shifted privacy left from optional to mandatory
- 7 Data Use Cases, 3 Data Use Obstacles, and deployable technologies that unlock new data use cases
- How the market is adopting technology for the 7 use cases, and the trends Gary is seeing
- What it means to "de-risk" data
- Beneficial uses of "variant twins" technology
- Building privacy in by design so that it increases revenue generation
- "Statutory pseudonymization" and how it can reduce data privacy risks while increasing utility and value (see the sketch after these notes)

Resources Mentioned:
- Learn about Anonos
- Read: "Technical Controls that Protect Data When in Use and Prevent Misuse"

Guest Info:
- Follow Gary on LinkedIn
- Follow Gary on Twitter
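As a small technical illustration of the basic control underlying pseudonymization, the sketch below replaces a direct identifier with a keyed-HMAC token, with the key held apart from the data. Note the hedge: "statutory pseudonymization" in Anonos's sense involves considerably more (e.g., dynamism and controlled relinkability to meet the GDPR's definition), so this shows only the core idea of separating data from identity. The key and record below are hypothetical.

```python
# Minimal pseudonymization sketch: replace direct identifiers with keyed
# HMAC tokens that cannot be reversed without the separately held key.
import hmac
import hashlib

SECRET_KEY = b"keep-me-in-a-separate-kms"   # hypothetical key, stored apart from data

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "ana@example.com", "purchase": "sensor-kit", "amount": 129.00}
protected = {**record, "email": pseudonymize(record["email"])}
print(protected)   # tokens stay stable for joins/counts, but expose no raw identifier
```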
Feb 7, 2023 • 59min

S2E5: What's New in Privacy-by-Design with R. Jason Cronk (IOPD)

R. Jason Cronk is the Founder of the Institute of Operational Privacy Design (IOPD) and CEO of Enterprivacy Consulting Group, as well as the author of Strategic Privacy by Design. I recently caught up with Jason at the annual Privacy Law Salon event, and we had a conversation about the socio-technical challenges of privacy, the different privacy-by-design frameworks he's worked on, and his thoughts on some hot topics in the web privacy space.

Thank you to our sponsor, Privado, the developer-friendly privacy platform.

We start off discussing updates to Strategic Privacy by Design, now in its 2nd edition. We chat about the brand-new ISO 31700 Privacy by Design for Consumer Goods and Services standard and its consensus process, and compare it to the NIST Privacy Framework, the IEEE 7002 Standard for Data Privacy, and Jason's work with the Institute of Operational Privacy Design (IOPD) and its newly-published Design Process Standard v1.

Jason and I also explore risk tolerance through the lens of privacy using FAIR (see the sketch after these notes). There's a lot of room for subjective interpretation, particularly of non-monetary harm, and Jason provides many thought-provoking examples of how this plays out in our society. We round out our conversation by talking about the challenges of Global Privacy Control (GPC) and which deceptive design strategies to look out for.

Topics Covered:
- Why we should think of privacy beyond "digital privacy"
- What readers can expect from Jason's book, Strategic Privacy by Design, and what's included in the 2nd edition
- IOPD's B2B third-party privacy audit
- Why you should leverage the FAIR quantitative risk analysis model to build effective privacy risk management programs
- The NIST Privacy Framework and developments from its Privacy Workforce Working Group
- Dark patterns, & why just asking the wrong question can be a privacy harm (interrogation)
- How there are 15 privacy harms, & only 1 of them is about security

Resources Mentioned:
- Learn about the ISO 31700 Privacy by Design Standard
- Review the IOPD Design Process Standard v1

Guest Info:
- Follow Jason on LinkedIn
- Follow Enterprivacy Consulting Group on Twitter
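Since FAIR comes up here, a toy Monte Carlo sketch may help show what "quantitative" means in this context: simulate loss event frequency and loss magnitude, then read risk off the resulting distribution. Every distribution and parameter below is invented for illustration; a real FAIR analysis calibrates them with estimates from subject-matter experts.

```python
# Toy FAIR-style Monte Carlo sketch: annual risk = simulated loss event
# frequency x loss magnitude. All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000                                    # simulated years

# Loss event frequency: privacy incidents per year.
lef = rng.poisson(lam=2.0, size=N)

# Loss magnitude per event: heavy-tailed (lognormal), in dollars.
def annual_loss(events):
    return rng.lognormal(mean=11.0, sigma=1.2, size=events).sum()

losses = np.array([annual_loss(k) for k in lef])

print(f"median annual loss: ${np.median(losses):,.0f}")
print(f"95th percentile:    ${np.percentile(losses, 95):,.0f}")
```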

Copyright © 2022 - 2024 Principled LLC. All rights reserved.