
80,000 Hours Podcast

Latest episodes

Aug 14, 2023 • 2h 37min

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah RitchieIn today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.Links to learn more, summary and full transcript.They cover:Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could getHer new book about how we could be the first generation to build a sustainable planetWhether climate change is the most worrying environmental issueHow we reduced outdoor air pollutionWhy Hannah is worried about the state of ​​biodiversitySolutions that address multiple environmental issues at onceHow the world coordinated to address the hole in the ozone layerSurprises from Our World in Data’s researchPsychological challenges that come up in Hannah’s workAnd plenty moreGet this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuire and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
Aug 7, 2023 • 2h 51min

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

Links to learn more, summary and full transcript.

Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it's not just throwing compute at the problem -- it's also hiring dozens of scientists and engineers to build out the Superalignment team.

Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: "Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did."

Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.

The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, "like using one fire to put out another fire."

But Jan's thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.

Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.

Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:
- If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
- How do you know that these technical problems can be solved at all, even in principle?
- At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do?

In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover:
- OpenAI's current plans to achieve 'superalignment' and the reasoning behind them
- Why alignment work is the most fundamental and scientifically interesting research in ML
- The kinds of people he's excited to hire to join his team and maybe save the world
- What most readers misunderstood about the OpenAI announcement
- The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
- What the standard should be for confirming whether Jan's team has succeeded
- Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved
- Whether Jan thinks OpenAI has deployed models too quickly or too slowly
- The many other actors who also have to do their jobs really well if we're going to have a good AI future
- Plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Aug 5, 2023 • 6min

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren't necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely.

Get these highlight episodes by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type '80k After Hours' into your podcasting app.

Highlights put together by Simon Monsour and Milo McGuire
Jul 31, 2023 • 3h 14min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars' worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.

In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.

Links to learn more, summary and full transcript.

(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)

One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.

As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.

In today's episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:
- Why we can't rely on just gradually solving those problems as they come up, the way we usually do with new technologies
- What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists
- Holden's case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world
- What the ML and AI safety communities get wrong in Holden's view
- Ways we might succeed with AI just by dumb luck
- The value of laying out imaginable success stories
- Why information security is so important and underrated
- Whether it's good to work at an AI lab that you think is particularly careful
- The track record of futurists' predictions
- And much more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
Jul 24, 2023 • 1h 19min

#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people he has also taken a big interest in AI this year, writing articles such as "This changes everything." In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above, as well as:
- Whether it's desirable to slow down AI research
- The value of engaging with current policy debates even if they don't seem directly important
- Which AI business models seem more or less dangerous
- Tensions between people focused on existing vs emergent risks from AI
- Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
Jul 10, 2023 • 2h 7min

#156 – Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it. And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus AnderljungIn today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.Links to learn more, summary and full transcript.They cover:The need for AI governance, including self-replicating models and ChaosGPTWhether or not AI companies will willingly accept regulationThe key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoringWhether we can be confident that people won't train models covertly and ignore the licencing systemThe progress we’ve made so far in AI governanceThe key weaknesses of these approachesThe need for external scrutiny of powerful modelsThe emergent capabilities problemWhy it really matters where regulation happensAdvice for people wanting to pursue a career in this fieldAnd much more.Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.Producer: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore
Jun 30, 2023 • 35min

Bonus: The Worst Ideas in the History of the World

Today's bonus release is a pilot for a new podcast called 'The Worst Ideas in the History of the World', created by Keiran Harris — producer of the 80,000 Hours Podcast.

If you have strong opinions about this one way or another, please email us at podcast@80000hours.org to help us figure out whether more of this ought to exist.

Chapters:
- Rob's intro (00:00:00)
- The Worst Ideas in the History of the World (00:00:51)
- My history with longtermism (00:04:01)
- Outlining the format (00:06:17)
- Will MacAskill's basic case (00:07:38)
- 5 reasons for why future people might not matter morally (00:10:26)
- Whether we can reasonably hope to influence the future (00:15:53)
- Great power wars (00:18:55)
- Nuclear weapons (00:22:27)
- Gain-of-function research (00:28:31)
- Closer (00:33:02)
- Rob's outro (00:35:13)
Jun 22, 2023 • 3h 13min

#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community. With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

Links to learn more, summary and full transcript.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer. By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:
- How can we best categorise all the ways AI could go wrong?
- Why did the US restrict the export of some chips to China and what impact has that had?
- Is the US in an 'arms race' with China or is that more an illusion?
- What is the deal with chips specialised for AI applications?
- How is the 'compute' industry organised?
- Downsides of using compute as a target for regulations
- Could safety mechanisms be built into computer chips themselves?
- Who would have the legal authority to govern compute if some disaster made it seem necessary?
- The reasons Rob doubts that any of this stuff will work
- Could AI be trained to operate as a far more severe computer worm than any we've seen before?
- What does the world look like when sluggish human reaction times leave us completely outclassed?
- And plenty more

Chapters:
- Rob's intro (00:00:00)
- The interview begins (00:04:35)
- What is compute exactly? (00:09:46)
- Structural risks (00:13:25)
- Why focus on compute? (00:21:43)
- Weaknesses of targeting compute (00:30:41)
- Chip specialisation (00:37:11)
- Export restrictions (00:40:13)
- Compute governance is happening (00:59:00)
- Reactions to AI regulation (01:05:03)
- Creating legal authority to intervene quickly (01:10:09)
- Building mechanisms into chips themselves (01:18:57)
- Rob not buying that any of this will work (01:39:28)
- Are we doomed to become irrelevant? (01:59:10)
- Rob's computer security bad dreams (02:10:22)
- Concrete advice (02:26:58)
- Article reading: Information security in high-impact areas (02:49:36)
- Rob's outro (03:10:38)

Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore
Jun 9, 2023 • 3h 10min

#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still.

Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work.

Links to learn more, summary and full transcript.

He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically ensuring that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important. In the short term it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely.

For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence -- from doomers to doubters -- and properly understand their point of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle — troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome.

Today's conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including:
- What he sees as the strongest case both for and against slowing down the rate of progress in AI research
- Why he disagrees with most other ML researchers that training a model on a sensible 'reward function' is enough to get a good outcome
- Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is "could this contain a superintelligence"
- That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default — he believes we just don't know
- That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what's going on with ML models, because they're different in important ways from every other case we might compare them to
- Why he's optimistic about DeepMind's work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves
- Why he isn't inherently worried about a future where we're surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree
- Why it's not enough for humanity to know how to align AI models — it's essential that management at AI labs correctly pick which methods they're going to use and have the practical know-how to apply them properly
- Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don't crash; and universities have specific majors in particular subjects
- Plenty more besides

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore
Jun 2, 2023 • 2h 56min

#153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work

GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency.

But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth?

Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year. Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water.

But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently.

Links to learn more, summary and full transcript.

Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.' The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them.

An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else.

Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today.

In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as:
- Why GiveWell flipped from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them
- What transferable lessons GiveWell learned from investigating different kinds of interventions
- Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine
- Severe malnourishment among children and what can be done about it
- How to deal with hidden and non-obvious costs of a programme
- Some cheap early treatments that can prevent kids from developing lifelong disabilities
- The various roles GiveWell is currently hiring for, and what's distinctive about their organisational culture
- And much more

Chapters:
- Rob's intro (00:00:00)
- The interview begins (00:03:14)
- GiveWell over the last couple of years (00:04:33)
- Dispensers for Safe Water (00:11:52)
- Syphilis diagnosis for pregnant women via technical assistance (00:30:39)
- Kangaroo Mother Care (00:48:47)
- Multiples of cash (01:01:20)
- Hidden costs (01:05:41)
- MiracleFeet (01:09:45)
- Serious malnourishment among young children (01:22:46)
- Vitamin A deficiency and supplementation (01:40:42)
- The subjective wellbeing approach in contrast with GiveWell's approach (01:46:31)
- The value of saving a life when that life is going to be very difficult (02:09:09)
- Whether economic policy is what really matters overwhelmingly (02:20:00)
- Careers at GiveWell (02:39:10)
- Donations (02:48:58)
- Parenthood (02:50:29)
- Rob's outro (02:55:05)

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore
